The Infinite Mixture of Infinite Gaussian Mixtures

Halid Z. Yerebakan, Department of Computer and Information Science, IUPUI, Indianapolis, IN 46202, hzyereba@cs.iupui.edu
Bartek Rajwa, Bindley Bioscience Center, Purdue University, W. Lafayette, IN 47907, rajwa@cyto.purdue.edu
Murat Dundar, Department of Computer and Information Science, IUPUI, Indianapolis, IN 46202, dundar@cs.iupui.edu

Abstract

The Dirichlet process mixture of Gaussians (DPMG) has been used in the literature for clustering and density estimation problems. However, many real-world data sets exhibit cluster distributions that cannot be captured by a single Gaussian. Modeling such data sets with DPMG creates several extraneous clusters even when clusters are relatively well-defined. Herein, we present the infinite mixture of infinite Gaussian mixtures (I2GMM) for more flexible modeling of data sets with skewed and multi-modal cluster distributions. Instead of using a single Gaussian for each cluster as in the standard DPMG model, the generative model of I2GMM uses a single DPMG for each cluster. The individual DPMGs are linked together through centering of their base distributions at the atoms of a higher-level DP prior. Inference is performed by a collapsed Gibbs sampler that also enables partial parallelization. Experimental results on several artificial and real-world data sets suggest the proposed I2GMM model can predict clusters more accurately than existing variational Bayes and Gibbs sampler versions of DPMG.

1 Introduction

The traditional approach to fitting a Gaussian mixture model to data involves using the well-known expectation-maximization algorithm to estimate component parameters [7]. The major limitation of this approach is the need to define the number of clusters in advance.
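For concreteness, the fixed-K requirement can be seen in a minimal EM sketch for a one-dimensional Gaussian mixture (illustrative NumPy code of our own, not the paper's implementation; the initialization scheme and all settings are assumptions):

```python
import numpy as np

def em_gmm_1d(x, K, n_iter=100):
    """Minimal EM for a one-dimensional Gaussian mixture.
    K, the number of components, must be fixed before fitting."""
    # Deterministic initialization: spread the means across the data range.
    mu = np.quantile(x, np.linspace(0.05, 0.95, K))
    var = np.full(K, x.var())
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] of component k for point i.
        log_r = (np.log(pi)
                 - 0.5 * np.log(2 * np.pi * var)
                 - 0.5 * (x[:, None] - mu) ** 2 / var)
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-4, 1, 300), rng.normal(4, 1, 300)])
pi, mu, var = em_gmm_1d(x, K=2)  # K is chosen by the user, not inferred
print(np.round(np.sort(mu), 1))
```

The point of the sketch is that nothing in the EM loop can change K; a wrong choice simply yields a wrong model, which motivates the nonparametric alternative discussed next.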
Although there are several ways to predict the number of clusters in a data set in an offline manner, these techniques are in general suboptimal because they decouple two interdependent tasks: predicting the number of clusters and predicting model parameters. The Dirichlet process mixture of Gaussians (DPMG), also known as the infinite Gaussian mixture model (IGMM), is a Gaussian mixture model (GMM) with a Dirichlet process (DP) prior defined over mixture components [8]. Unlike traditional mixture modeling, DPMG predicts the number of clusters while simultaneously performing model inference. In the DPMG model the number of clusters can grow arbitrarily to better accommodate the data as needed. DPMG in general works well when the clusters are well-defined with Gaussian-like distributions. When the distributions of clusters are heavy-tailed, skewed, or multi-modal, multiple mixture components per cluster may be needed for more accurate modeling of the cluster data. Since there is no dependency structure in DPMG to associate mixture components with clusters, additional mixture components produced during inference are all treated as independent clusters. This results in a suboptimal clustering of the underlying data.

We propose the infinite mixture of IGMMs (I2GMM) for more accurate clustering of data sets exhibiting skewed and multi-modal cluster distributions. The underlying generative model of I2GMM employs a different DPMG for each cluster. A dependency structure is imposed across individual DPMGs through centering of their base distributions at one of the atoms of the higher-level DP. This way, individual cluster data are modeled by lower-level DPs using one DPMG per cluster, while the atoms defining the base distributions of individual clusters, as well as the cluster proportions, are modeled by the higher-level DP. Our model allows sharing of the covariance matrices across mixture components of the same DPMG.
The data model, which is conjugate to the base distributions of both the higher- and lower-level DPs, makes closed-form posterior predictive distributions possible. We use a collapsed Gibbs sampler for inference. Each scan of the Gibbs sampler involves two loops: one that iterates over individual data instances to sample component indicator variables, and another that iterates over components to sample cluster indicator variables. Conditioned on the cluster indicator variables, the component indicator variables can be sampled in parallel, which significantly speeds up inference under certain circumstances.

2 Related Work

Dependent Dirichlet processes (DDP) have been studied in the literature for modeling collections of distributions that vary in time, in spatial region, in covariate space, or in grouped-data settings (images, documents, biological samples). Previous work most related to the current work involves studies that investigate DDP in grouped-data settings. Teh et al. use a hierarchical DP (HDP) prior over the base distributions of individual DP models to introduce a mechanism that allows for sharing of atoms across multiple groups [15]. When each group is modeled by a different DPMG, this allows for sharing of the same mean vector and covariance matrix across multiple groups. Such a dependency may potentially be useful in a multi-group setting. However, when all data are contained in a single group, as in the current study, sharing the same mixture component across multiple cluster distributions leaves the shared mixture components statistically unidentifiable. The HDP-RE model by Kim & Smyth [10] and the transformed DP by Sudderth et al. [14] relax the exact sharing imposed by HDP to obtain a dependency structure between multiple groups that allows components to share perturbed copies of atoms.
Although such a sharing mechanism may be useful for modeling random variations in component parameters across multiple groups, it is not very useful for clustering data sets with skewed and multi-modal distributions. Both HDP-RE and the transformed DP still model each group's data by a single DPMG and suffer from the same drawbacks as DPMG when clustering data sets with skewed and multi-modal distributions. The nested Dirichlet process (nDP) by Rodriguez et al. [13] is a DP whose base distribution is in turn another DP. This model is introduced for modeling multi-group data sets where groups share not just individual mixture components, as in HDP, but the entire mixture model defined by a DPMG. nDP can be adapted to single-group data sets with multiple clusters, but with the restriction that each DPMG is shared only once to ensure identifiability. Such a restriction practically eliminates dependencies across the DPMGs modeling different clusters and would not offer a clustering property at the group level. Unlike existing work, which creates dependencies across multiple DPMGs through exact or perturbed sharing of mixture components or through sharing of the entire mixture model, the proposed I2GMM model associates each cluster with a distinct atom of the higher-level DP through centering of the base distribution of the corresponding DPMG at that atom. Thus, the higher-level DP defines meta-clusters whereas the lower-level DPs model the actual cluster data. Mixture components associated with the same DPMG have their own mean vectors but share the same covariance matrix. Apart from preserving the conjugacy of the data model, covariance sharing across mixture components of the same DPMG allows for identification of clusters that differ in shape even when they are not well separated by their means.

3 Dirichlet Process Mixture

A Dirichlet process is a distribution over discrete distributions. It is parameterized by a concentration parameter α and a base distribution H, and is denoted DP(αH).
Each probability mass in a sampled discrete distribution is called an atom. According to the stick-breaking construction of the DP [9], each sample from a DP can be considered a collection of countably infinitely many atoms. In this representation the base distribution is a prior over the locations of the atoms, and the concentration parameter affects the distribution of the atom weights, i.e., the stick lengths. Another popular characterization of the DP is the Chinese restaurant process (CRP) [3], which we utilize during model inference. The discrete nature of its samples makes the DP suitable as a prior distribution over mixture weights in mixture models. Although samples from a DP are infinite-dimensional discrete distributions, the posterior distribution conditioned on finite data always uses a finite number of mixture components. We denote each data instance by x_i ∈ R^d, where i ∈ {1, ..., n} and n is the total number of data instances. For each instance, θ_i indicates the set of parameters from which the instance is sampled. For the Gaussian data model θ_i = {µ_i, Σ_i}, where µ_i denotes the mean vector and Σ_i the covariance matrix. The generative model of the Dirichlet process Gaussian mixture is given by (1).

x_i ∼ p(x_i | θ_i),   θ_i ∼ G,   G ∼ DP(αH)     (1)

Owing to the discreteness of the distribution G, the θ_i corresponding to different instances will not all be distinct. It is this property of the DP that offers clustering over the θ_i and in turn over the data instances. Choosing H from a family of distributions conjugate to the Gaussian distribution produces a closed-form solution for the posterior predictive distribution of DPMG. The bivariate prior over the atoms of G is defined in (2).

H = NIW(µ_0, Σ_0, κ_0, m) = N(µ | µ_0, Σ/κ_0) × W^{−1}(Σ | Σ_0, m)     (2)

where µ_0 is the prior mean and κ_0 is a scaling constant that controls the deviation of the mean vectors from the prior mean. The parameter Σ_0 is the scale matrix and m is the degrees of freedom.
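The stick-breaking construction described above can be sketched with a truncated draw from DP(αH) (illustrative code of our own; the truncation level T is a numerical approximation, not part of the model, and the standard-normal base distribution is an assumption for the example):

```python
import numpy as np

def sample_dp_stick_breaking(alpha, base_sampler, T=1000, seed=0):
    """Truncated stick-breaking draw from DP(alpha * H):
    weights are stick lengths, atoms are drawn i.i.d. from the base H."""
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=T)                      # stick proportions
    sticks = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    weights = betas * sticks                                  # atom weights, sum to ~1
    atoms = np.array([base_sampler(rng) for _ in range(T)])   # atom locations ~ H
    return weights, atoms

# Base distribution H: standard normal over atom locations (assumed for the demo).
weights, atoms = sample_dp_stick_breaking(alpha=2.0,
                                          base_sampler=lambda rng: rng.normal())
print(round(float(weights.sum()), 3))
```

For a large truncation level the leftover stick mass is negligible, so the weights sum to essentially one; smaller α concentrates the mass on fewer atoms, matching the role of the concentration parameter described above.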
The posterior predictive distribution for a Gaussian data model and NIW prior can be obtained by integrating out µ and Σ analytically. Integrating out µ and Σ leaves the component indicator variables t_i for each instance x_i as the only random variables in the state space. Using the CRP representation of the DP, the t_i can be sampled as in (3).

p(t_i = k | X, t^{−i}) ∝ α p(x_i)   if k = K + 1
p(t_i = k | X, t^{−i}) ∝ n_k^{−i} p(x_i | A_k^{−i}, x̄_k^{−i})   if k ≤ K     (3)

where p(x_i) and p(x_i | A_k, x̄_k) denote the posterior predictive distributions for an empty and an occupied component, respectively, both of which are multivariate Student-t distributions. X and t denote the sets of all data instances and their corresponding indicator variables, respectively. n_k is the number of data instances in component k. A_k and x̄_k are the scatter matrix and sample mean for component k, respectively. The superscript −i indicates the exclusion of the effect of instance i from the corresponding variable. Inference for DPMG can also be performed using the stick-breaking representation of the DP, with the actual inference performed either by a Gibbs sampler or through variational Bayes [5, 11].

4 The Infinite Mixture of Infinite Gaussian Mixture Models

When modeling data sets containing skewed and multi-modal clusters, DPMG tends to produce multiple components for each cluster. Owing to the single-layer structure of DPMG, no direct associations among different components of the same cluster can be made. As a result of this limitation, all components are treated as independent clusters, so the number of clusters is overpredicted and the actual clusters are split into multiple subclusters. A more flexible model for clustering data sets with skewed and multi-modal clusters can be obtained using the two-layer generative model in (4).
x_i ∼ N(x_i | µ_i, Σ_j)
µ_i ∼ G_j,   G_j ∼ DP(α H_j),   H_j = N(µ_j, Σ_j/κ_1)     (4)
(µ_j, Σ_j) ∼ G,   G ∼ DP(γH),   H = NIW(µ_0, Σ_0, κ_0, m)

In this model, the top-layer DP generates cluster-specific parameters µ_j and Σ_j according to the base distribution H and concentration parameter γ. These parameters in turn define the base distributions H_j of the bottom-layer DPs. Since each H_j represents a different cluster, the H_j can be considered meta-clusters from which the mixture components of the corresponding cluster are generated. In this model both the number of clusters and the number of mixture components within a cluster can potentially be infinite, hence the name I2GMM. The top-layer DP models the number of clusters, their sizes, and the base distributions of the bottom-layer DPs, whereas each bottom-layer DP models the number of components in a cluster and their sizes. Allowing atom locations in the bottom-layer DPMGs to differ from their corresponding cluster atom provides the flexibility to model clusters that cannot be effectively modeled by a single Gaussian. The scaling parameter κ_1 adjusts the within-cluster scattering of the component mean vectors, whereas the scaling parameter κ_0 adjusts the between-cluster scattering of the cluster-specific mean vectors. Expressing both H and the H_j as functions of Σ_j not only preserves the conjugacy of the model but also allows for sharing of the same covariance matrix across mixture components of the same cluster. Posterior inference for the model in (4) can be performed by a collapsed Gibbs sampler that iteratively samples the component indicator variables t = {t_i}_{i=1}^{n} of the data instances and the cluster indicator variables c = {c_k}_{k=1}^{K} of the mixture components. When sampling t_i we restrict sampling to components whose cluster indicator variables are equal to c_{t_i}, in addition to a new component. The conditional distribution for sampling t_i can be expressed by the following equation.
p(t_i = k | X, t^{−i}, c) ∝ α p(x_i)   if k = K + 1
p(t_i = k | X, t^{−i}, c) ∝ n_k^{−i} p(x_i | A_k^{−i}, x̄_k^{−i}, S_{c_k})   if k : c_k = c_{t_i}     (5)

where S_{c_k} = {A_ℓ, x̄_ℓ, n_ℓ}_{ℓ: c_ℓ = c_k}. When sampling component indicator variables, owing to the dependency among data instances, removing a data instance from a component affects not only the parameters of the component it belongs to but also the corresponding cluster parameters. Technically speaking, the parameters of both the component and the corresponding cluster have to be updated for exact inference. However, updating the cluster parameters for every data instance removed would significantly slow down inference. For practical purposes we only update the component parameters and assume that removing a single data instance does not significantly change the cluster parameters. The conditional distribution for sampling c_k can be expressed by the following equation.

p(c_k = j | X, t, c^{−k}) ∝ γ ∏_{i: t_i = k} p(x_i)   if j = J + 1
p(c_k = j | X, t, c^{−k}) ∝ m_j ∏_{i: t_i = k} p(x_i | S_j)   if j ≤ J     (6)

where S_j = {A_ℓ, x̄_ℓ, n_ℓ}_{ℓ: c_ℓ = j}, J is the number of clusters, and m_j is the number of mixture components assigned to cluster j. Next, we discuss the derivation of the component-level posterior predictive distributions, i.e., p(x_i | A_k^{−i}, x̄_k^{−i}, S_{c_k}), which can be obtained by evaluating the integral in (7).
p(x_i | A_k^{−i}, x̄_k^{−i}, S_{c_k}) = ∫∫ p(x_i | µ_k, Σ_{c_k}) p(µ_k, Σ_{c_k} | A_k^{−i}, x̄_k^{−i}, S_{c_k}) dµ_k dΣ_{c_k}     (7)

To evaluate the integral in (7) we need the posterior distribution of the component parameters, namely p(µ_k, Σ_{c_k} | A_k^{−i}, x̄_k^{−i}, S_{c_k}), which satisfies

p(µ_k, Σ_{c_k} | A_k^{−i}, x̄_k^{−i}, S_{c_k}) ∝ p(µ_k, Σ_{c_k}, A_k^{−i}, x̄_k^{−i} | S_{c_k}) = p(x̄_k^{−i} | µ_k, Σ_{c_k}) p(A_k^{−i} | Σ_{c_k}) p(µ_k | Σ_{c_k}, S_{c_k}) p(Σ_{c_k} | S_{c_k})     (8)

where

p(x̄_k^{−i} | µ_k, Σ_{c_k}) = N(µ_k, (n_k^{−i})^{−1} Σ_{c_k})
p(A_k^{−i} | Σ_{c_k}) = W(Σ_{c_k}, n_k^{−i} − 1)
p(µ_k | Σ_{c_k}, S_{c_k}) = N(µ̄, κ̄^{−1} Σ_{c_k})
p(Σ_{c_k} | S_{c_k}) = W^{−1}(Σ_0 + ∑_{ℓ: c_ℓ = c_k} A_ℓ, m + ∑_{ℓ: c_ℓ = c_k} (n_ℓ − 1))

µ̄ = [∑_{ℓ: c_ℓ = c_k} (n_ℓ κ_1 / (n_ℓ + κ_1)) x̄_ℓ + κ_0 µ_0] / [∑_{ℓ: c_ℓ = c_k} n_ℓ κ_1 / (n_ℓ + κ_1) + κ_0]

κ̄ = [∑_{ℓ: c_ℓ = c_k} n_ℓ κ_1 / (n_ℓ + κ_1) + κ_0] κ_1 / [∑_{ℓ: c_ℓ = c_k} n_ℓ κ_1 / (n_ℓ + κ_1) + κ_0 + κ_1]

Once we substitute p(µ_k, Σ_{c_k} | A_k^{−i}, x̄_k^{−i}, S_{c_k}) into (7) and evaluate the integral, we obtain p(x_i | A_k^{−i}, x̄_k^{−i}, S_{c_k}) in the form of a multivariate Student-t distribution:

p(x_i | A_k^{−i}, x̄_k^{−i}, S_{c_k}) = Student-t(µ̂, Σ̂, v)     (9)

The location vector µ̂, the scale matrix Σ̂, and the degrees of freedom v are given below.

Location vector:
µ̂ = (n_k^{−i} x̄_k^{−i} + κ̄ µ̄) / (n_k^{−i} + κ̄)     (10)

Scale matrix:
Σ̂ = [Σ_0 + ∑_{ℓ: c_ℓ = c_k} A_ℓ + A_k^{−i} + (n_k^{−i} κ̄ / (n_k^{−i} + κ̄)) (x̄_k^{−i} − µ̄)(x̄_k^{−i} − µ̄)^T] (κ̄ + n_k^{−i} + 1) / [v (κ̄ + n_k^{−i})]     (11)

Degrees of freedom:
v = m + ∑_{ℓ: c_ℓ = c_k} (n_ℓ − 1) + n_k^{−i} − d + 1     (12)

The cluster-level posterior predictive distributions can be readily obtained from p(x_i | A_k^{−i}, x̄_k^{−i}, S_{c_k}) by dropping A_k, x̄_k, and n_k from (10)-(12). Similarly, the posterior predictive distribution for an empty component/cluster can be obtained by dropping S_{c_k} from (10)-(12) in addition to A_k, x̄_k, and n_k. Thanks to the two-layer structure of the proposed model, inference for I2GMM can be partially parallelized: conditioned on the cluster indicator variables, the component indicator variables for data instances in the same cluster can be sampled independently of the data instances in other clusters.
The amount of actual speed-up that can be achieved by parallelization depends on multiple factors, including the number of clusters, the cluster sizes, and how fast the other loop, which iterates over cluster indicator variables, can be run.

5 Experiments

We evaluate the proposed I2GMM model on five different data sets and compare its performance against three different versions of DPMG in terms of clustering accuracy and run time.

5.1 Data Sets

Flower formed by Gaussians: We generated a flower-shaped two-dimensional artificial data set using a different Gaussian mixture model for each of the four parts (petals, stem, and two leaves) of the flower. Each part is considered a separate cluster. Although the covariance matrices are the same for all Gaussian components within a mixture, they differ between mixtures to create clusters of different shapes. The petals are formed by a mixture of nine Gaussians sharing a spherical covariance. The stem is formed by a mixture of four Gaussians sharing a diagonal covariance. Each leaf is formed by a mixture of two Gaussians sharing a full covariance. There are a total of seventeen Gaussian components, four clusters, and 17,000 instances (1,000 instances per component) in this data set. A scatter plot of this data set is shown in Fig. 1a.

Lymphoma: The lymphoma data set is one of the data sets used in the FlowCAP (Flow Cytometry Critical Assessment of Population Identification Methods) 2010 competition [1]. This data set consists of thirty sub-data sets, each generated from a lymph node biopsy sample of a patient using a flow cytometer. Flow cytometry is a single-cell screening, analysis, and sorting technology that plays a crucial role in research and clinical immunology, hematology, and oncology. The cellular phenotypes are defined in FC by combinations of morphological features (measured by elastic light scatter) and abundances of surface and intracellular markers revealed by fluorescently labeled antibodies.
In the lymphoma data set each sub-data set contains thousands of instances, with each instance representing a cell by a five-dimensional feature vector. For each sub-data set the cell populations are manually gated by experts. Each sub-data set has between two and four cell populations, i.e., clusters. Owing to the intrinsic mechanical and optical limitations of a flow cytometer, the distributions of cell populations in FC data end up heavy-tailed or skewed, which makes their modeling by a single Gaussian highly impractical [12]. Although the clusters in this data set are relatively well-defined, accurate modeling of the cell distributions is a challenge due to the skewed nature of the distributions.

Rare cell populations: This data set is a small subset of one of the data sets used in the FlowCAP 2012 competition [1]. It contains 279,546 instances, with each instance characterizing a white blood cell in a six-dimensional feature space. There are three clusters manually labeled by experts. This is an interesting data set for two reasons. First, the clusters are highly unbalanced in terms of the number of instances belonging to each. Two of the clusters, which are highly significant for measuring the immunological response of the patient, are extremely rare: the ratios of the number of instances available from each of the two rare classes to the total number of instances are 0.0004 and 0.0005, respectively. Second, the third cluster, which contains all cells not belonging to one of the two rare-cell populations, has a distribution that is both skewed and multi-modal, making it extremely challenging to recover as a single cluster.

Hyperspectral imagery: This data set is from a flightline over a university campus. The hyperspectral data provides image data in 126 spectral bands in the visible and infrared regions. A total of 21,518 pixels from eight different land cover types are manually labeled.
Some of the land cover types, such as roof tops, have multi-modal distributions. The cluster sizes are also relatively unbalanced, with pixels belonging to roof tops constituting about one half of the labeled pixels. To reduce run time, the dimensionality is reduced by projecting the original data onto its first thirty principal components. The data with reduced dimensionality is used in all experiments.

Letter recognition: This is a benchmark data set available through the UCI machine learning repository [4]. There are twenty-six well-balanced clusters (one for each letter) in this data set.

Figure 1: Clusters predicted by I2GMM, VB, KD-VB, and ColGibbs on the flower data set: (a) true clusters, (b) I2GMM, (c) VB, (d) KD-VB, (e) ColGibbs. Black contours in the first panel indicate the distributions of the individual Gaussian components forming the flower. Each color refers to a different cluster. Points denote data instances.

Table 1: Micro and macro F1 scores produced by I2GMM, VB, KD-VB, and ColGibbs on the five data sets. For each data set the first line gives micro F1 scores and the second line macro F1 scores. Numbers in parentheses indicate standard deviations across ten repetitions. Results for the lymphoma data set are the average of the results from thirty sub-data sets.
Data set             I2GMM           I2GMMp          VB              KD-VB   ColGibbs
Flower               0.975 (0.032)   0.991 (0.003)   0.640 (0.087)   0.584   0.525 (0.010)
                     0.982 (0.015)   0.990 (0.002)   0.643 (0.059)   0.639   0.611 (0.009)
Lymphoma             0.920 (0.016)   0.922 (0.020)   0.454 (0.056)   0.819   0.634 (0.034)
                     0.847 (0.021)   0.847 (0.022)   0.509 (0.044)   0.762   0.656 (0.029)
Rare classes         0.487 (0.031)   0.493 (0.020)   0.182 (0.015)   0.353   0.234 (0.059)
                     0.756 (0.012)   0.756 (0.010)   0.441 (0.032)   0.472   0.638 (0.023)
Hyperspectral        0.624 (0.017)   0.626 (0.021)   0.433 (0.031)   0.554   0.427 (0.024)
                     0.667 (0.018)   0.661 (0.012)   0.580 (0.034)   0.380   0.596 (0.020)
Letter Recognition   0.459 (0.015)   0.467 (0.017)   0.420 (0.015)   0.267   0.398 (0.018)
                     0.460 (0.015)   0.467 (0.017)   0.420 (0.015)   0.267   0.399 (0.018)

5.2 Benchmark Models and Evaluation Metric

We compare the performance of the proposed I2GMM model with three different versions of DPMG: the collapsed Gibbs sampler version (ColGibbs) discussed in Section 3, the variational Bayes version (VB) introduced in [5], and the KD-tree-based accelerated variational Bayes version (KD-VB) introduced in [11]. For I2GMM and ColGibbs we used our own implementations, developed in C++. For VB and KD-VB we used existing MATLAB (Natick, MA) implementations.¹ In order to see the effect of parallelization on execution times, we ran the proposed technique in two modes: parallelized (I2GMMp) and unparallelized (I2GMM). All data sets are scaled to have unit variance for each feature. The ColGibbs model has five free parameters (α, Σ_0, m, κ_0, µ_0); the I2GMM model has two more (κ_1, γ). We use vague priors for α and γ by fixing their values to one. We set m to the minimum feasible value, d + 2, to achieve maximum degrees of freedom in the shape of the covariance matrices. The prior mean µ_0 is set to the mean of the entire data. The scale matrix Σ_0 is set to I/s, where I is the identity matrix. This leaves the scaling constant s of Σ_0, κ_0, and κ_1 as the three free parameters.
We use s = 150/(d log d), κ_0 = 0.05, and κ_1 = 0.5 in the experiments with all five data sets described above. Micro and macro F1 scores are used as the performance measures for comparing the clustering accuracy of the four techniques. As one-to-many matchings are expected between true and predicted clusters, the F1 score for a true cluster is computed as the maximum of the F1 scores over all predicted clusters. The Gibbs samplers for ColGibbs and I2GMM are run for 1,500 sweeps. The first 1,000 samples are discarded as burn-in, and eleven samples drawn fifty sweeps apart are saved for final evaluation. We used an approach similar to the one proposed in [6] for matching cluster labels across different samples. The mode of the cluster labels computed across ten samples is assigned as the final cluster label for each data instance. ColGibbs and I2GMM use stochastic sampling, whereas VB uses a random initialization stage. Thus, these three techniques may produce results that vary from one run to another on the same data set. We therefore repeat each experiment ten times and report the average results of the ten repetitions for these three techniques.

5.3 Results and Discussion

Micro and macro F1 scores produced by the four techniques on all five data sets are reported in Table 1. On the flower data set I2GMM achieves almost perfect micro and macro F1 scores and correctly predicts the true number of clusters. The other three techniques produce several extraneous clusters, which leads to poor F1 scores. The clusters predicted by each of the four techniques are shown in Fig. 1. As expected, ColGibbs identifies the distributions of individual Gaussian components as clusters, as opposed to the actual clusters formed by mixtures of Gaussians. The piece-wise linear cluster boundaries obtained by VB and KD-VB, which split the original clusters into multiple subclusters, can be explained by the simplistic model assumptions and approximations that characterize variational Bayes algorithms. On the lymphoma data set the proposed I2GMM model achieves average micro and macro F1 scores of 0.920 and 0.848, respectively. These values are not only significantly higher than the corresponding F1 scores produced by the other three techniques but also on par with the best-performing techniques in the FlowCAP 2010 competition [2]. Results for the thirty individual sub-data sets in the lymphoma data set are available in the supplementary document. A similar trend is observed with the other three real-world data sets, as I2GMM achieves the best F1 scores among the four techniques. Between I2GMM and ColGibbs, I2GMM consistently generates fewer clusters across all data sets, as expected. Overall, among the three versions of DPMG that differ in the inference algorithm used, there is no clear consensus across the five data sets as to which version predicts clusters more accurately. However, the proposed I2GMM model, which extends DPMG to skewed and multi-modal clusters, clearly stands out as the most accurate model on all five data sets. The run-time results included in Table 2 favor the variational Bayes techniques over the Gibbs sampler-based ones, as expected.

¹ https://sites.google.com/site/kenichikurihara/academic-software/variational-dirichlet-process-gaussian-mixture-model

Table 2: Execution times for I2GMM, I2GMMp, VB, KD-VB, and ColGibbs in seconds on the five data sets. Numbers in parentheses indicate standard deviations across ten repetitions. For the lymphoma data set the reported results are the average run time per sub-data set.

Data set             I2GMM         I2GMMp        VB            KD-VB   ColGibbs
Flower               54 (2)        41 (4)        1 (0.2)       7       59 (1)
Lymphoma             119 (4)       85 (4)        51 (10)       3       63 (3)
Rare classes         9,738 (349)   5,034 (220)   2,171 (569)   16      7,250 (182)
Hyperspectral        5,385 (109)   3,456 (174)   582 (156)     2       7,455 (221)
Letter Recognition   1,545 (63)    953 (26)      122 (22)      12      2,785 (123)
Despite the longer run times, the significantly higher F1 scores achieved on data sets with diverse characteristics suggest that I2GMM is preferable to DPMG for more accurate clustering. The results also suggest that I2GMM can benefit from parallelization. The actual improvement in execution time depends on data characteristics as well as on how fast the unparallelized loop can be run. The largest gain from parallelization was obtained on the rare-classes data set, with an almost two-fold speed-up on an eight-core workstation.

6 Conclusions

We introduced I2GMM for more effective clustering of multivariate data sets containing skewed and multi-modal clusters. The proposed model extends DPMG to introduce dependencies between components and clusters through a two-layer generative model. Unlike standard DPMG, where each cluster is modeled by a single Gaussian, I2GMM offers the flexibility to model each cluster by a mixture of a potentially infinite number of components. The results of experiments with real and artificial data sets favor I2GMM over the variational Bayes and collapsed Gibbs sampler versions of DPMG in terms of clustering accuracy. Although execution time can be improved by sampling component indicator variables in parallel, the achievable speed-up is limited by the execution time of sampling the cluster indicator variables. As the most time-consuming part of this task is the sequential computation of likelihoods for data instances, significant gains in execution time can be achieved by parallelizing the likelihood computations. I2GMM is implemented in C++. The source files and executables are available on the web.²

Acknowledgments

This research was sponsored by the National Science Foundation (NSF) under Grant Number IIS-1252648 (CAREER), by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) under Grant Number 5R21EB015707, and by the PhRMA Foundation (2012 Research Starter Grant in Informatics).
The content is solely the responsibility of the authors and does not represent the official views of NSF, NIBIB, or PhRMA.

² https://github.com/halidziya/I2GMM

References

[1] FlowCAP - flow cytometry: Critical assessment of population identification methods. http://flowcap.flowsite.org/.
[2] N. Aghaeepour, G. Finak, FlowCAP Consortium, DREAM Consortium, H. Hoos, T. R. Mosmann, R. Brinkman, R. Gottardo, and R. H. Scheuermann. Critical assessment of automated flow cytometry data analysis techniques. Nature Methods, 10(3):228–238, March 2013.
[3] D. J. Aldous. Exchangeability and related topics. In École d'Été de Probabilités de Saint-Flour XIII 1983, pages 1–198. Springer-Verlag, 1985. Lecture Notes in Mathematics 1117.
[4] K. Bache and M. Lichman. UCI Machine Learning Repository, 2013.
[5] D. M. Blei and M. I. Jordan. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1):121–144, 2006.
[6] A. J. Cron and M. West. Efficient classification-based relabeling in mixture models. The American Statistician, 65:16–20, 2011.
[7] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1–38, 1977.
[8] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1(2):209–230, 1973.
[9] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161–173, 2001.
[10] S. Kim and P. Smyth. Hierarchical Dirichlet processes with random effects. In B. Schölkopf, J. C. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 697–704, Cambridge, MA, 2007. MIT Press.
[11] K. Kurihara, M. Welling, and N. Vlassis. Accelerated variational Dirichlet process mixtures. In Advances in Neural Information Processing Systems 19. 2007.
[12] S. Pyne, X. Hu, K. Wang, E. Rossin, T.-I. Lin, L. M. Maier, C. Baecher-Allan, G. J. McLachlan, P. Tamayo, D. A. Hafler, P. L. De Jager, and J. P. Mesirov. Automated high-dimensional flow cytometric data analysis. Proceedings of the National Academy of Sciences, 106(21):8519–8524, 2009.
[13] A. Rodriguez, D. B. Dunson, and A. E. Gelfand. The nested Dirichlet process. Journal of the American Statistical Association, 103:1131–1154, 2008.
[14] E. B. Sudderth, A. B. Torralba, W. T. Freeman, and A. S. Willsky. Describing visual scenes using transformed objects and parts. International Journal of Computer Vision, 77:291–330, 2008.
[15] Y. Teh, M. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
Partition-wise Linear Models

Hidekazu Oiwa* (Graduate School of Information Science and Technology, The University of Tokyo) hidekazu.oiwa@gmail.com
Ryohei Fujimaki (NEC Laboratories America) rfujimaki@nec-labs.com

Abstract

Region-specific linear models are widely used in practical applications because of their non-linear but highly interpretable model representations. One of the key challenges in their use is the non-convexity of simultaneously optimizing regions and region-specific models. This paper proposes novel convex region-specific linear models, which we refer to as partition-wise linear models. Our key ideas are 1) assigning linear models not to regions but to partitions (region-specifiers) and representing region-specific linear models by linear combinations of partition-specific models, and 2) optimizing regions via partition selection from a large number of given partition candidates by means of convex structured regularizations. In addition to providing initialization-free, globally optimal solutions, our convex formulation makes it possible to derive a generalization bound and to use such advanced optimization techniques as proximal methods and the decomposition of proximal maps for sparsity-inducing regularizations. Experimental results demonstrate that our partition-wise linear models perform better than, or are at least competitive with, state-of-the-art region-specific or locally linear models.

1 Introduction

Among pre-processing methods, data partitioning is one of the most fundamental: an input space is divided into several sub-spaces (regions), and a simple model is assigned to each region. In addition to the better predictive performance that results from the non-linear nature of multiple partitions, the regional structure provides a better understanding of data (i.e., interpretability). Region-specific linear models learn both the partitioning structure and the predictor in each region.
Such models vary, from traditional decision/regression trees [1] to more advanced models [2, 3, 4], depending on their region-specifiers (how they characterize regions), their region-specific prediction models, and the objective functions to be optimized. One important challenge that remains in learning these models is the non-convexity that arises from the inter-dependency of optimizing regions and the prediction models in individual regions. Most previous work suffers from disadvantages arising from non-convexity, including initialization-dependency (bad local minima) and a lack of generalization error analysis.

We propose convex region-specific linear models, which we refer to as partition-wise linear models. Our models have two distinguishing characteristics that help avoid the non-convexity problem.

Partition-wise Modeling. We propose partition-wise linear models as a novel class of region-specific linear models. Our models divide an input space by means of a small set of partitions1. Each partition possesses one weight vector, and this weight vector is applied only to one side of the divided space. It is trained to represent the local relationship between input vectors and output values. Region-specific predictors are constructed by linear combinations of these weight vectors. Our partition-wise parameterization enables us to construct convex objective functions.

Convex Optimization via Sparse Partition Selection. We optimize regions by selecting effective partitions from a large number of given candidates, using convex sparsity-inducing structured regularizations.

* The work reported here was conducted when the first author was a visiting researcher at NEC Laboratories America.
1 In our paper, a region is a sub-space in an input space. Multiple regions do not intersect each other and, in their entirety, they cover the whole input space. A partition is an indicator function that divides an input space into two parts.
In other words, we trade continuous region optimization for convexity: partitions are allowed to be located only at given discrete candidate positions, which lets us derive convex optimization problems. We have developed an efficient algorithm to solve the resulting structured-sparse optimization problems, adopting a proximal method [5, 6] and the decomposition of proximal maps [7]. As a reliable partition-wise linear model, we have developed a global and local residual model that combines one global linear model with a set of partition-wise linear ones. Further, our theoretical analysis gives a generalization bound for this model to evaluate the risk of over-fitting. The bound indicates that we can increase the number of partition candidates at less than an exponential order with respect to the sample size, which is large enough to achieve good predictive performance in practice. Experimental results have demonstrated that our models perform better than, or are at least competitive with, state-of-the-art region-specific or locally linear models.

1.1 Related Work

Region-specific linear models and locally linear models are the models most closely related to our own. The former category, to which our models belong, assumes one predictor per region and has an advantage in clear model interpretability, while the latter assigns one predictor to every single datum and has an advantage in higher model flexibility. Interpretable models are able to indicate clearly where and how the relationships between inputs and outputs change. Well-known precursors to region-specific linear models are decision/regression trees [1], which use rule-based region-specifiers and constant-valued predictors. Another traditional framework is the hierarchical mixture of experts [8], a probabilistic tree-based region-specific model framework. Recently, Local Supervised Learning through Space Partitioning (LSL-SP) has been proposed [3].
LSL-SP utilizes a linear chain of linear region-specifiers as well as region-specific linear predictors. The highly important advantage of LSL-SP is its upper bound on generalization error, derived via the VC dimension. Additionally, a Cost-Sensitive Tree of Classifiers (CSTC) algorithm has been developed [4]. It utilizes a tree-based linear localizer and linear predictors. This algorithm's uniqueness among region-specific linear models is that it takes "feature utilization cost" into account for test-time speed-up. Although the developers' formulation with sparsity-inducing structured regularization is, in a way, related to ours, their model representations and, more importantly, their motivation (test-time speed-up) are different from ours.

Fast Local Kernel Support Vector Machines (FaLK-SVMs) represent state-of-the-art locally linear models. FaLK-SVMs produce test-point-specific weight vectors by learning local predictive models from the neighborhoods of individual test points [9]. The method aims to reduce prediction-time cost by pre-processing for nearest-neighbor calculations and local model sharing, at the cost of initialization-independency. Another advanced locally linear model is that of Locally Linear Support Vector Machines (LLSVMs) [10]. LLSVMs assign linear SVMs to multiple anchor points produced by manifold learning [11, 12] and construct test-point-specific linear predictors according to the weights of the anchor points with respect to individual test points. When the manifold learning procedure is initialization-independent, LLSVMs become initial-value-independent because of the convexity of the optimization problem. Similarly, clustered SVMs (CSVMs) [13] assume given data clusters and learn multiple SVMs for the individual clusters simultaneously. Although CSVMs are convex and a generalization bound analysis has been provided, they cannot optimize regions (clusters). Jose et al.
have proposed Local Deep Kernel Learning (LDKL) [2], which adopts an intermediate approach with respect to region-specific and locally linear models. LDKL is a tree-based local kernel classifier in which the kernel defines regions and can be seen as performing region-specification. One main difference from common region-specific linear models is that LDKL changes kernel combination weights for individual test points, so the predictors are locally determined in every single region. Its aim is to speed up kernel SVMs' prediction while maintaining their non-linear ability.

Table 1 summarizes the above-described state-of-the-art models in contrast with ours from a number of significant perspectives. Our proposed model uniquely exhibits three properties together: joint optimization of regions and region-specific predictors, initialization-independent optimization, and a meaningful generalization bound.

Table 1: Comparison of region-specific and locally linear models.

|                            | Ours       | LSL-SP | CSTC   | LDKL   | FaLK-SVM     | LLSVM        |
|----------------------------|------------|--------|--------|--------|--------------|--------------|
| Region Optimization        | ✓          | ✓      | ✓      | ✓      |              |              |
| Initialization-independent | ✓          |        |        |        |              | ✓            |
| Generalization Bound       | ✓          | ✓      |        |        | ✓            |              |
| Region Specifiers          | (Sec. 2.2) | Linear | Linear | Linear | Non-Regional | Non-Regional |

1.2 Notations

Scalars and vectors are denoted by lower-case x. Matrices are denoted by upper-case X. The n-th training sample and its label are denoted by x_n ∈ R^D and y_n, respectively.

2 Partition-wise Linear Models

This section explains partition-wise linear models under the assumption that effective partitioning is already fixed. We discuss how to optimize partitions and region-specific linear models in Section 3.

2.1 Framework

Figure 1 illustrates the concept of partition-wise linear models. Suppose we have P partitions (red dashed lines), which essentially specify 2^P regions. Partition-wise linear models are defined as follows. First, we assign a linear weight vector a_p to the p-th partition.
This partition has an activeness function, f_p, which indicates whether the attached weight vector a_p is applied to individual data points or not. For example, in Figure 1, we set the weight vector a_1 to be applied to the right-hand side of partition p_1; the corresponding activeness function is defined as f_1(x) = 1 when x is in the right-hand side of p_1. Second, region-specific predictors (the squared regions surrounded by partitions in Figure 1) are defined by linear combinations of the active partition-wise weight vectors, which are themselves linear models.

[Figure 1: Concept of partition-wise linear models. Four partitions p_1, ..., p_4 with weight vectors a_1, ..., a_4 induce regions whose predictors are sums of the active weight vectors, e.g., a_1 + a_3 or a_1 + a_2 + a_3.]

Let us formally define the partition-wise linear models. We have a set of given activeness functions f_1, ..., f_P, denoted in vector form as f(·) = (f_1(·), ..., f_P(·))^T. The p-th element f_p(x) ∈ {0, 1} indicates whether the attached weight vector a_p is applied to x or not. The activeness function f(·) can represent at most 2^P regions, and f(x) specifies to which region x belongs. The linear model of an individual region is then represented as Σ_{p=1}^P f_p(·) a_p. It is worth noting that partition-wise linear models use P linear weight vectors to represent 2^P regions, which restricts the number of parameters. The overall predictor g(·) can be denoted as follows:

g(x) = Σ_p f_p(x) Σ_d a_dp x_d.   (1)

Let us define A as A = (a_1, ..., a_P). The partition-wise linear model (1) simply acts as a linear model w.r.t. A while capturing the non-linear nature of data (individual regions use different linear models). Such non-linearity originates from the activeness functions f_p, which are fundamentally important components of our models.
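To make (1) concrete, here is a minimal sketch of evaluating the predictor, with hypothetical thresholds and weights of our own choosing (P = 2 axis-parallel partitions in D = 2 dimensions; nothing here is from the paper's experiments):

```python
import numpy as np

# Hypothetical example: D = 2 features, P = 2 axis-parallel partitions.
# f_p(x) in {0, 1} says whether weight vector a_p (column p of A) applies to x.
def activeness(x):
    return np.array([1.0 if x[0] > 0 else 0.0,   # f_1: right of x_1 = 0
                     1.0 if x[1] > 0 else 0.0])  # f_2: above x_2 = 0

A = np.array([[1.0, -2.0],      # A[d, p] = a_dp, so the columns are a_1, a_2
              [0.5,  3.0]])

def g(x):
    """Predictor (1): g(x) = sum_p f_p(x) sum_d a_dp x_d = x^T (A f(x))."""
    return float(x @ (A @ activeness(x)))

# In the quadrant where both partitions are active, g is the linear model
# (a_1 + a_2)^T x; in the quadrant where neither is active, g is identically 0.
```

The matrix form x^T (A f(x)) makes explicit that g is linear in A for fixed f, which is what the convex formulation below exploits.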
By introducing a convex loss function ℓ(·, ·) (e.g., squared loss for regression; squared hinge or logistic loss for classification), we can represent the objective function of partition-wise linear models as a convex loss minimization problem:

min_A Σ_n ℓ(y_n, g(x_n)) = min_A Σ_n ℓ(y_n, Σ_p f_p(x_n) Σ_d a_dp x_nd).   (2)

Here we have given a convex formulation of region-specific linear models under the assumption that a set of partitions is given. In Section 3, we propose a convex optimization algorithm for partitions and regions as a partition selection problem, using sparsity-inducing structured regularization.

2.2 Partition Activeness Functions

A partition activeness function f_p divides the input space into two regions, and a set of activeness functions defines the entire region structure. Although in principle any function is applicable as a partition activeness function, we prefer as simple a region representation as possible because of our practical motivation for utilizing region-specific linear models (i.e., interpretability is a priority). This paper restricts them to being parallel to the coordinates, e.g., f_p(x) = 1 if x_i > 2.5 and f_p(x) = 0 otherwise, with respect to the i-th coordinate. Although this "rule representation" is simpler than others [2, 3], which use dense linear hyperplanes as region-specifiers, our empirical evaluation (Section 5) indicates that our models perform competitively with or even better than those others by appropriately optimizing the simple region-specifiers (partition activeness functions).

2.3 Global and Local Residual Model

As a special instance of partition-wise linear models, we here propose a model which we refer to as the global and local residual model. It employs a global linear weight vector a_0 in addition to the partition-wise linear weights. The predictor model (1) can be rewritten as:

g(x) = a_0^T x + Σ_p f_p(x) Σ_d a_dp x_d.   (3)

The global weight vector is active for all data.
The integration of the global weight vector enables the model to determine how features affect outputs not only locally but also globally. Let us consider a new partition activeness function f_0(x) that always returns 1 regardless of x. Then, by setting f(·) = (f_0(·), f_1(·), ..., f_P(·))^T and A = (a_0, a_1, ..., a_P), the global and local residual model can be represented using the same notation as in Section 2.1. Although a_0 and the a_p have no fundamental difference here, they differ in terms of how we regularize them (Section 3.1).

3 Convex Optimization of Regions and Predictors

In Section 2, we presented a convex formulation of partition-wise linear models in (2) under the assumption that a set of partition activeness functions was given. This section relaxes that assumption and proposes a convex partition optimization algorithm.

3.1 Region Optimization as Sparse Partition Selection

Let us assume that we have been given P + 1 partition activeness functions, f_0, f_1, ..., f_P, and their attached linear weight vectors, a_0, a_1, ..., a_P, where f_0 and a_0 are the global activeness function and weight vector, respectively. We formulate the region optimization problem as partition selection, by setting most of the a_p to zero, since a_p = 0 corresponds to the situation in which the p-th partition does not exist. Formally, we formulate our optimization problem with respect to regions and weight vectors by introducing two types of sparsity-inducing constraints into (2):

min_A Σ_n ℓ(y_n, g(x_n))  s.t.  Σ_{p∈{1,...,P}} 1{a_p ≠ 0} ≤ µ_P,  ‖a_p‖_0 ≤ µ_0 ∀p.   (4)

The former constraint restricts the number of effective partitions to at most µ_P. Note that we do not enforce this sparse partition constraint on the global model a_0, so as to be able to determine local trends as residuals from a global trend. The latter constraint restricts the number of effective features of each a_p to at most µ_0.
We add this constraint because 1) it is natural to assume that only a small number of features are locally effective in practical applications, and 2) a sparser model is typically preferred for our purposes because of its better interpretability.

3.2 Convex Optimization via Decomposition of Proximal Maps

3.2.1 The Tightest Convex Envelope

The constraints in (4) are non-convex, and it is very hard to find the global optimum due to the indicator functions and L0 penalties. This makes optimization over the non-convex region a very complicated task, and we therefore apply a convex relaxation. One standard approach to convex relaxation would be a combination of group-L1 (the first constraint) and L1 (the second constraint) penalties. Here, however, we consider the tightest convex relaxation of (4):

min_A Σ_n ℓ(y_n, g(x_n))  s.t.  Σ_{p=1}^P ‖a_p‖_∞ ≤ µ_P,  Σ_{d=1}^D ‖a_dp‖_∞ ≤ µ_0 ∀p.   (5)

The tightness of (5) is shown in the full version [14]. Through such a convex envelope of the constraints, the feasible region becomes convex. Therefore, we can reformulate (5) into the following equivalent problem:

min_A Σ_n ℓ(y_n, g(x_n)) + Ω(A)  where  Ω(A) = λ_P Σ_{p=1}^P ‖a_p‖_∞ + λ_0 Σ_{p=0}^P Σ_{d=1}^D ‖a_dp‖_∞,   (6)

where λ_P and λ_0 are regularization weights corresponding to µ_P and µ_0, respectively. We derive an efficient optimization algorithm using a proximal method and the decomposition of proximal maps.

3.2.2 Proximal Method and FISTA

The proximal method is a standard efficient tool for solving convex optimization problems with non-differentiable regularizers. It iteratively applies gradient steps and proximal steps to update the parameters. This achieves O(1/t) convergence [5] under Lipschitz-continuity of the loss gradient, or even O(1/t^2) convergence if an acceleration technique, such as the fast iterative shrinkage-thresholding algorithm (FISTA) [6, 15], is incorporated. Let us define A(t) as the weight matrix at the t-th iteration.
In the gradient step, the weight vectors are updated to decrease the empirical loss through a first-order approximation of the loss functions:

A(t+1/2) = A(t) − η(t) Σ_n ∂_{A(t)} ℓ(y_n, g(x_n)),   (7)

where η(t) is a step size and ∂_{A(t)} ℓ(·, ·) is the gradient of the loss evaluated at A(t). In the proximal step, we apply the regularization to the current solution A(t+1/2):

A(t+1) = M_0(A(t+1/2))  where  M_0(B) = argmin_A (1/2)‖A − B‖_F^2 + η(t) Ω(A),   (8)

where ‖·‖_F is the Frobenius norm. Furthermore, we employed FISTA [6] to achieve a faster convergence rate for the weakly convex problem, and adopted a backtracking rule [6] to avoid the difficulty of calculating appropriate step widths beforehand. Through empirical evaluations as well as theoretical considerations, we have confirmed that this significantly improves convergence in learning partition-wise linear models. The details are given in the full version [14].

3.2.3 Decomposition of the Proximal Map

The computational cost of the proximal method depends strongly on the efficiency of solving the proximal step (8). A number of approaches have been developed for improving this efficiency, including the minimum-norm-point approach [16] and the network-flow approach [17, 18]. Their computational efficiencies depend strongly on feature and partition size2, however, which makes them inappropriate for our formulation because of potentially large feature and partition sizes. Alternatively, this paper employs the decomposition of proximal maps [7]. The key idea here is to decompose the proximal step into a sequence of sub-problems that are easily solvable. We first introduce two easily solvable proximal maps:

M_1(B) = argmin_A (1/2)‖A − B‖_F^2 + η(t) λ_P Σ_{p=1}^P ‖a_p‖_∞,   (9)

M_2(B) = argmin_A (1/2)‖A − B‖_F^2 + η(t) λ_0 Σ_{p=0}^P Σ_{d=1}^D |a_dp|.   (10)

The theorem below guarantees that the decomposition of the proximal map (8) can be performed. The proof is provided in the full version.
Theorem 1. The original problem (8) can be decomposed into a sequence of two easily solvable proximal map problems:

A(t+1) = M_0(A(t+1/2)) = M_2(M_1(A(t+1/2))).   (11)

The first proximal map (9) is the proximal operator with respect to the L1,∞ regularization. This problem can be decomposed into group-wise sub-problems. The proximal operator for each group can be computed through a projection onto an L1-norm ball (derived from the Moreau decomposition [16]), that is,

a_p = b_p − argmin_c ‖c − b_p‖^2  s.t.  ‖c‖_1 ≤ η(t) λ_P.

This projection problem can be solved efficiently [19]. The second proximal map (10) is the well-known proximal operator with respect to L1 regularization. It can be decomposed into element-wise sub-problems whose solutions are given in closed form by soft-thresholding: a_dp = sgn(b_dp) max(0, |b_dp| − η(t) λ_0). Since these two sub-problems are easy to solve, we can easily obtain the solution of the original proximal map (8).

The computational complexity of partition-wise linear models is O(NP + P̂D + PD log D), where P̂ is the number of active partitions. The derivation of this computational complexity, an implementation that speeds up the optimization through warm starts, and a summary of the iterative update procedure are given in the full version.

2 For example, the fastest algorithm for the network-flow approach has O(M(B+1) log(M^2/(B+1))) time complexity, where B is the number of breakpoints determined by the structure of the graph (B ≤ D(P+1) = O(DP)) and M is the number of nodes, that is, P + D(P+1) = O(DP) [17]. Therefore, the worst-case computational complexity is O(D^2 P^2 log DP).

4 Generalization Bound Analysis

This section presents the derivation of a generalization error bound for partition-wise linear models and discusses how far we can increase the number of partition candidates P relative to the number of samples N.
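The decomposed proximal map of Theorem 1 can be sketched in a few lines of NumPy. This is our illustration, not the authors' code: the L∞ prox in M_1 is computed via the Moreau decomposition as b_p minus the projection of b_p onto an L1 ball [19], M_2 is element-wise soft-thresholding, and we assume column 0 of A holds the global weight a_0, which is exempt from the partition-level L∞ penalty:

```python
import numpy as np

def project_l1_ball(v, z):
    """Euclidean projection of v onto the L1 ball {c : ||c||_1 <= z}."""
    if np.abs(v).sum() <= z:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                 # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - z)[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(b, tau):
    """prox of tau * ||.||_inf, via the Moreau decomposition."""
    return b - project_l1_ball(b, tau)

def prox_l1(b, tau):
    """prox of tau * ||.||_1: element-wise soft-thresholding."""
    return np.sign(b) * np.maximum(np.abs(b) - tau, 0.0)

def M0(A, eta, lam_P, lam_0):
    """Proximal step (8) as M2(M1(A)); A has columns a_0, a_1, ..., a_P."""
    B = A.astype(float).copy()
    for p in range(1, B.shape[1]):               # M1: group prox on each a_p, p >= 1
        B[:, p] = prox_linf(B[:, p], eta * lam_P)
    return prox_l1(B, eta * lam_0)               # M2: soft-threshold every entry
```

Note that prox_linf shrinks only the largest-magnitude entries of a_p, and once ‖b_p‖_1 ≤ η(t)λ_P it zeroes the whole column; this is exactly the mechanism by which partitions get selected out.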
Our bound analysis is related to that of [20], which gives bounds for general overlapping group-Lasso cases, while ours is specifically designed for partition-wise linear models. Let us first derive an empirical Rademacher complexity [21] for the feasible weight space conditioned on (6). We can derive the Rademacher complexity of our model using the lemma below. Its proof is given in the full version, and the result is used to analyze the expected loss bound.

Lemma 1. If Ω(A) ≤ 1 is satisfied, and if ‖x‖_∞ ≤ 1 almost surely with respect to x ∈ X, then the empirical Rademacher complexity of partition-wise linear models can be bounded as:

ℜ_A(X) ≤ (2^{3/2} / √N) (2 + √(ln(P + D(P+1)))).   (12)

The next theorem shows the generalization bound of the global and local residual model. The bound is straightforwardly derived from Lemma 1 and the discussion of [21], which shows that a uniform bound on the estimation error can be obtained from an upper bound on the Rademacher complexity such as Lemma 1. Using this uniform bound, the generalization bound of the global and local residual model defined in formula (4) can be derived.

Theorem 2. Let A be the set of weights that satisfy Ω_group(A) ≤ 1, where Ω_group(A) is defined as in Section 2.5 of [20]. Let each datum (x_n, y_n) be sampled i.i.d. from a data distribution D, and let the loss function ℓ(·, ·) be L-Lipschitz with respect to a norm ‖·‖ and have range within [0, 1]. Then, for any constant δ ∈ (0, 1) and any A ∈ A, the following inequality holds with probability at least 1 − δ:

E_{(x,y)∼D}[ℓ(y, g(x))] ≤ (1/N) Σ_{n=1}^N ℓ(y_n, g(x_n)) + ℜ_A(X) + √(ln(1/δ) / (2N)).   (13)

This theorem implies how far we can increase the number of partition candidates. The third term on the right-hand side is obviously small if N is large. The second term converges to zero as N → ∞ if P grows no faster than o(e^N), which is sufficiently large in practice.
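To see concretely why sub-exponentially many partition candidates are harmless, here is a quick substitution into (12); this worked example is our illustration, not from the paper:

```latex
% Polynomially many candidates, P = N^k: the complexity term vanishes,
\Re_A(X) \;\le\; \frac{2^{3/2}}{\sqrt{N}}\left(2 + \sqrt{\ln\!\bigl(P + D(P+1)\bigr)}\right)
        \;=\; O\!\left(\sqrt{\frac{k\ln N + \ln D}{N}}\right) \xrightarrow{\;N\to\infty\;} 0.
% Exponentially many candidates, P = e^{cN}: \sqrt{\ln P} = \sqrt{cN}, so the
% term tends to the constant 2^{3/2}\sqrt{c} and no longer vanishes.
```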
In summary, we can expect to handle a sufficient number of partition candidates for learning, with little risk of over-fitting.

5 Experiments

We conducted two types of experiments: 1) an evaluation of how partition-wise linear models perform on a simple synthetic dataset, and 2) comparisons with state-of-the-art region-specific and locally linear models on standard classification and regression benchmark datasets.

5.1 Demonstration using a Synthetic Dataset

We generated a synthetic binary classification dataset as follows. The x_n were uniformly sampled from a 20-dimensional input space in which each dimension had values in [−1, 1]. The target variables were determined by the XOR rule over the first and second features (the other 18 features were added as noise for prediction purposes): if the signs of the first and second feature values are the same, y = 1; otherwise y = −1. This is a well-known case in which linear models do not work. For example, L1-regularized logistic regression produced nearly random outputs, with an error rate of 0.421.

We generated one partition for each feature except the first. Each partition became active if the corresponding feature value was greater than 0.0; therefore, the number of candidate partitions was 19. We used the logistic loss. The hyper-parameters3 were set as λ_0 = 0.01 and λ_P = 0.001, and the algorithm was run for 1,000 iterations.

[Figure 2: How the global and local residual model classifies XOR data. The red line indicates the effective partition; green lines indicate local predictors; red circles indicate samples with y = −1; blue circles indicate samples with y = 1. The learned weights were a_0 = (−4.37, 0.0, ...) and a_1 = (10.96, 0.0, ...), giving A f(x)^T = (−4.37, 0.0, ...) on one side of the partition and (6.59, 0.0, ...) on the other. The model classified the XOR data precisely.]

Figure 2 illustrates the results produced by the global and local residual model.
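As an illustration, the synthetic data just described can be generated as follows (a sketch; the sample count is our choice, since the paper does not state it):

```python
import numpy as np

# Section 5.1 synthetic data: 20 uniform features on [-1, 1]; the label is +1
# when the first two features share a sign (the XOR-style rule), -1 otherwise.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 20))
y = np.where(np.sign(X[:, 0]) == np.sign(X[:, 1]), 1, -1)

# No single linear model separates this: the label is uncorrelated with every
# individual feature, which is why a partition on the second feature combined
# with global and local weights on the first (Figure 2) is what the model needs.
```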
The left-hand panel of Figure 2 shows the learned effective partition (red line), to which the weight vector a_1 = (10.96, 0.0, ...) was assigned. This weight a_1 is applied only to the region above the red line. By combining a_1 with the global weight a_0, we obtained the piece-wise linear representation shown in the right-hand panel. While it remains difficult for existing piece-wise linear methods to capture such global structures4, our convex formulation makes it possible for the global and local residual model to easily capture the global XOR structure.

5.2 Comparisons using Benchmark Datasets

Table 2: Classification and regression datasets. N is the size of the data, D the number of dimensions, P the number of partitions; CL/RG denotes the type of dataset (CL: classification, RG: regression).

|               | N       | D     | P     | CL/RG |
|---------------|---------|-------|-------|-------|
| skin          | 245,057 | 3     | 12    | CL    |
| winequality   | 6,497   | 11    | 44    | CL    |
| census income | 45,222  | 105   | 99    | CL    |
| twitter       | 140,707 | 11    | 44    | CL    |
| a1a           | 1,605   | 113   | 452   | CL    |
| breast-cancer | 683     | 10    | 40    | CL    |
| internet ad   | 2,359   | 1,559 | 1,558 | CL    |
| energy heat   | 768     | 8     | 32    | RG    |
| energy cool   | 768     | 8     | 32    | RG    |
| abalone       | 4,177   | 10    | 40    | RG    |
| kinematics    | 8,192   | 8     | 32    | RG    |
| puma8NH       | 8,192   | 8     | 32    | RG    |
| bank8FM       | 8,192   | 8     | 32    | RG    |
| communities   | 1,994   | 101   | 404   | RG    |

We next used benchmark datasets to compare our models with other state-of-the-art region-specific ones. In these experiments, we generated the partition candidates (activeness functions) as follows. For continuous-valued features, we calculated all 5-quantiles of each feature and generated one partition at each quantile point; a partition became active if the feature value was greater than the corresponding quantile value. For binary categorical features, we generated two partitions: one became active when the feature value was 1 (yes), and the other became active only when the feature value was 0 (no).
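The quantile-based candidate generation for continuous features can be sketched as follows (function names are ours). Using the four interior 5-quantile points per feature yields P = 4D candidates, which matches Table 2 (e.g., D = 8 gives P = 32 and D = 113 gives P = 452):

```python
import numpy as np

# One axis-parallel partition per interior 5-quantile point of each feature,
# with f_p(x) = 1 iff the feature value exceeds the threshold.
def make_partitions(X):
    """Return (feature index, threshold) pairs: 4 quantile cuts per feature."""
    return [(d, np.quantile(X[:, d], q))
            for d in range(X.shape[1])
            for q in (0.2, 0.4, 0.6, 0.8)]

def activeness_matrix(X, partitions):
    """F[n, p] = f_p(x_n) in {0, 1} for each candidate partition p."""
    return np.column_stack([(X[:, d] > t).astype(float) for d, t in partitions])
```

Once F is materialized, the predictor (1) on the candidate set is an ordinary linear model in the Kronecker-style features F[n, p] * X[n, d], so any convex solver applies.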
We utilized several standard benchmark datasets from the UCI repository (skin, winequality, census income, twitter, internet ad, energy heat, energy cool, communities), the libsvm datasets (a1a, breast-cancer), and the LIACC datasets (abalone, kinematics, puma8NH, bank8FM). Table 2 summarizes the specifications of each dataset.

5.2.1 Classification

For classification, we compared the global and local residual model (Global/Local) with L1 logistic regression (Linear), LSL-SP with linear discriminant analysis5, LDKL supported by L2-regularized hinge loss6, FaLK-SVM with linear kernels7, and C-SVM with an RBF kernel8. Note that C-SVM is neither a region-specific nor a locally linear classification model; it is, rather, non-linear. We compared it with ours as a reference with respect to a common non-linear classification model.

3 We conducted several experiments with other hyper-parameter settings and confirmed that variations in the hyper-parameter settings did not significantly affect results.
4 For example, a decision tree cannot be used to find a "true" XOR structure, since the marginal distributions of the first and second features cannot discriminate between the positive and negative classes.
5 The source code was provided by the author of [3].
6 https://research.microsoft.com/en-us/um/people/manik/code/LDKL/download.html
7 http://disi.unitn.it/~segata/FaLKM-lib/
8 We used the libsvm package. http://www.csie.ntu.edu.tw/~cjlin/libsvm/

Table 3: Classification results: error rate (standard deviation).
|               | Linear         | Global/Local   | LSL-SP         | LDKL            | FaLK-SVM       | RBF-SVM        |
|---------------|----------------|----------------|----------------|-----------------|----------------|----------------|
| skin          | 8.900 (0.174)  | 0.249 (0.048)  | 12.481 (8.729) | 1.858 (1.012)   | 0.040 (0.016)  | 0.229 (0.029)  |
| winequality   | 33.667 (1.988) | 23.713 (1.202) | 30.878 (1.783) | 36.795 (3.198)  | 28.706 (1.298) | 23.898 (1.744) |
| census income | 43.972 (0.404) | 35.697 (0.453) | 35.405 (1.179) | 47.229 (2.053)  | –              | 45.843 (0.772) |
| twitter       | 6.964 (0.164)  | 4.231 (0.090)  | 8.370 (0.245)  | 15.557 (11.393) | 4.135 (0.149)  | 9.109 (0.160)  |
| a1a           | 16.563 (2.916) | 16.250 (2.219) | 20.438 (2.717) | 17.063 (1.855)  | 18.125 (1.398) | 16.500 (1.346) |
| breast-cancer | 35.000 (4.402) | 3.529 (1.883)  | 3.677 (2.110)  | 35.000 (4.402)  | –              | 33.824 (4.313) |
| internet ad   | 7.319 (1.302)  | 2.638 (1.003)  | 6.383 (1.118)  | 13.064 (3.601)  | 3.362 (0.997)  | 3.447 (0.772)  |

Table 4: Regression results: root mean squared loss (standard deviation).

|             | Linear        | Global/Local  | RegTree       | RBF-SVR       |
|-------------|---------------|---------------|---------------|---------------|
| energy heat | 0.480 (0.047) | 0.101 (0.014) | 0.050 (0.005) | 0.219 (0.017) |
| energy cool | 0.501 (0.044) | 0.175 (0.018) | 0.200 (0.018) | 0.221 (0.026) |
| abalone     | 0.687 (0.024) | 0.659 (0.023) | 0.727 (0.028) | 0.713 (0.025) |
| kinematics  | 0.766 (0.019) | 0.634 (0.022) | 0.732 (0.031) | 0.347 (0.010) |
| puma8NH     | 0.793 (0.023) | 0.601 (0.017) | 0.612 (0.024) | 0.571 (0.020) |
| bank8FM     | 0.255 (0.012) | 0.218 (0.009) | 0.254 (0.008) | 0.202 (0.007) |
| communities | 0.586 (0.049) | 0.578 (0.040) | 0.653 (0.060) | 0.618 (0.053) |

For our models, we used the logistic function as the loss function. The maximum number of iterations was set to 1,000, and the algorithm stopped early when the gap in the empirical loss from the previous iteration remained below 10^−9 for 10 consecutive iterations. Hyper-parameters9 were optimized through 10-fold cross-validation. We fixed the number of regions to 10 in LSL-SP, the tree depth to 3 in LDKL, and the neighborhood size to 100 in FaLK-SVM. Table 3 summarizes the classification errors.
We observed that: 1) Global/Local consistently performed well and achieved the best error rates on four of the seven datasets. 2) LSL-SP performed well for census income and breast-cancer, but did significantly worse than Linear for skin, twitter, and a1a. Similarly, LDKL performed worse than Linear for census income, twitter, a1a, and internet ad. This arose partly from over-fitting and partly from bad local minima. It is particularly noteworthy that the standard deviations of LDKL were much larger than those of the others; the initialization issue would seem to be significant in practice. 3) FaLK-SVM performed well in most cases, but its computational cost was significantly higher than that of the others, and it was unable to obtain results for census income and internet ad (we stopped the algorithm after 24 hours of running).

5.2.2 Regression

For regression, we compared Global/Local with Linear, a regression tree built by CART (RegTree)10 [1], and epsilon-SVR with an RBF kernel11. Target variables were standardized to zero mean and unit variance. Performance was evaluated using the root mean squared loss on the test data. The tree depth of RegTree and ε in RBF-SVR were determined by 10-fold cross-validation. The other experimental settings were the same as those used in the classification tasks.

Table 4 summarizes the RMSE values. As in the classification tasks, Global/Local consistently performed well. For kinematics, RBF-SVR performed much better than Global/Local, but Global/Local was better than Linear and RegTree on many of the other datasets.

6 Conclusion

We have proposed a novel convex formulation of region-specific linear models, which we refer to as partition-wise linear models. Our approach simultaneously optimizes regions and predictors using sparsity-inducing structured penalties. To solve the optimization problem efficiently, we derived an algorithm based on the decomposition of proximal maps.
Thanks to its convexity, our method is free from initialization dependency, and a generalization error bound can be derived. Empirical results demonstrate the superiority of partition-wise linear models over other region-specific and locally linear models.

Acknowledgments

The majority of the work was done during the internship of the first author at the NEC central research laboratories.

⁹ λ1, λ2, p in Global/Local; λ1 in Linear; λW, λθ, λθ′, σ in LDKL; C in FaLK-SVM; and C, γ in RBF-SVM.
¹⁰ We used the scikit-learn package. http://scikit-learn.org/
¹¹ We used the libsvm package.

References
[1] Leo Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Wadsworth, 1984.
[2] Cijo Jose, Prasoon Goyal, Parv Aggrwal, and Manik Varma. Local deep kernel learning for efficient non-linear svm prediction. In ICML, pages 486–494, 2013.
[3] Joseph Wang and Venkatesh Saligrama. Local supervised learning through space partitioning. In NIPS, pages 91–99, 2012.
[4] Zhixiang Xu, Matt Kusner, Minmin Chen, and Kilian Q. Weinberger. Cost-sensitive tree of classifiers. In ICML, pages 133–141, 2013.
[5] Paul Tseng. Approximation accuracy, gradient methods, and error bound for structured convex optimization. Mathematical Programming, 125(2):263–295, 2010.
[6] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[7] Yaoliang Yu. On decomposing the proximal map. In NIPS, pages 91–99, 2013.
[8] Michael I. Jordan and Robert A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2):181–214, 1994.
[9] Nicola Segata and Enrico Blanzieri. Fast and scalable local kernel machines. Journal of Machine Learning Research, 11:1883–1926, 2010.
[10] Lubor Ladicky and Philip H.S. Torr. Locally linear support vector machines. In ICML, pages 985–992, 2011.
[11] Kai Yu, Tong Zhang, and Yihong Gong.
Nonlinear learning using local coordinate coding. In NIPS, pages 2223–2231, 2009.
[12] Ziming Zhang, Lubor Ladicky, Philip H.S. Torr, and Amir Saffari. Learning anchor planes for classification. In NIPS, pages 1611–1619, 2011.
[13] Quanquan Gu and Jiawei Han. Clustered support vector machines. In AISTATS, pages 307–315, 2013.
[14] Hidekazu Oiwa and Ryohei Fujimaki. Partition-wise linear models. CoRR, 2014.
[15] Yurii Nesterov. Gradient methods for minimizing composite objective function. Core discussion papers, 2007.
[16] Francis R. Bach. Structured sparsity-inducing norms through submodular functions. In NIPS, pages 118–126, 2010.
[17] Giorgio Gallo, Michael D. Grigoriadis, and Robert E. Tarjan. A fast parametric maximum flow algorithm and applications. SIAM Journal on Computing, 18(1):30–55, 1989.
[18] Kiyohito Nagano and Yoshinobu Kawahara. Structured convex optimization under submodular constraints. In UAI, 2013.
[19] John Duchi and Yoram Singer. Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research, 10:2899–2934, 2009.
[20] Andreas Maurer and Massimiliano Pontil. Structured sparsity and generalization. Journal of Machine Learning Research, 13:671–690, 2012.
[21] Peter L. Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
Learning to Discover Efficient Mathematical Identities

Wojciech Zaremba, Dept. of Computer Science, Courant Institute, New York University
Karol Kurach, Google Zurich & Dept. of Computer Science, University of Warsaw
Rob Fergus, Dept. of Computer Science, Courant Institute, New York University

Abstract

In this paper we explore how machine learning techniques can be applied to the discovery of efficient mathematical identities. We introduce an attribute grammar framework for representing symbolic expressions. Given a grammar of math operators, we build trees that combine them in different ways, looking for compositions that are analytically equivalent to a target expression but of lower computational complexity. However, as the space of trees grows exponentially with the complexity of the target expression, brute-force search is impractical for all but the simplest of expressions. Consequently, we introduce two novel learning approaches that are able to learn from simpler expressions to guide the tree search. The first of these is a simple n-gram model, the other being a recursive neural network. We show how these approaches enable us to derive complex identities, beyond the reach of brute-force search or human derivation.

1 Introduction

Machine learning approaches have proven highly effective for statistical pattern recognition problems, such as those encountered in speech or vision. However, their use in symbolic settings has been limited. In this paper, we explore how learning can be applied to the discovery of mathematical identities. Specifically, we propose methods for finding computationally efficient versions of a given target expression. That is, we seek a new expression which computes an identical result to the target, but has a lower complexity (in time and/or space). We introduce a framework based on attribute grammars [14] that allows symbolic expressions to be expressed as a sequence of grammar rules.
Brute-force enumeration of all valid rule combinations allows us to discover efficient versions of the target, including those too intricate to be discovered by human manipulation. But for complex target expressions this strategy quickly becomes intractable, due to the exponential number of combinations that must be explored. In practice, a random search within the grammar tree is used to avoid memory problems, but the chance of finding a matching solution becomes vanishingly small for complex targets. To overcome this limitation, we use machine learning to produce a search strategy for the grammar trees that selectively explores branches likely (under the model) to yield a solution. The training data for the model comes from solutions discovered for simpler target expressions. We investigate several different learning approaches. The first group are n-gram models, which learn pairs, triples, etc. of expressions that were part of previously discovered solutions and thus might be part of the solution for the current target. We also train a recursive neural network (RNN) that operates within the grammar trees. This model is first pretrained to learn a continuous representation for symbolic expressions. Then, using this representation, we learn to predict the next grammar rule to add to the current expression to yield an efficient version of the target. Through the use of learning, we are able to dramatically widen the complexity and scope of expressions that can be handled in our framework. We show examples of (i) O(n^3) target expressions which can be computed in O(n^2) time (e.g. see Examples 1 & 2), and (ii) cases where naive evaluation of the target would require exponential time, but where the same result can be computed in O(n^2) or O(n^3) time. The majority of these examples are too complex to be found manually or by exhaustive search and, as far as we are aware, are previously undiscovered. All code and evaluation data can be found at https://github.com/kkurach/math_learning.
In summary, our contributions are:
• A novel grammar framework for finding efficient versions of symbolic expressions.
• Showing how machine learning techniques can be integrated into this framework, and demonstrating how training models on simpler expressions can help with the discovery of more complex ones.
• A novel application of a recursive neural network to learn a continuous representation of mathematical structures, making the symbolic domain accessible to many other learning approaches.
• The discovery of many new mathematical identities which offer a significant reduction in computational complexity for certain expressions.

Example 1: Assume we are given matrices A ∈ R^{n×m} and B ∈ R^{m×p}. We wish to compute the target expression sum(sum(A*B)), i.e.

  Σ_{i=1}^{n} Σ_{j=1}^{m} Σ_{k=1}^{p} A_{i,j} B_{j,k},

which naively takes O(nmp) time. Our framework is able to discover an efficient version of the formula that computes the same result in O(n(m + p)) time: sum((sum(A, 1) * B)', 1). Our framework builds grammar trees that explore valid compositions of expressions from the grammar, using a search strategy. In this example, the naive strategy of randomly choosing permissible rules suffices, and we can find another tree which matches the target expression in reasonable time. Below, we show trees for (i) the original expression and (ii) the efficient formula, which avoids the use of a matrix-matrix multiply operation and hence is efficient to compute.

Example 2: Consider the target expression sum(sum((A*B)^k)), where k = 6. For an expression of this degree, there are 9785 possible grammar trees and the naive strategy used in Example 1 breaks down. We therefore learn a search strategy, training a model on successful trees from simpler expressions, such as those for k = 2, 3, 4, 5.
Our learning approaches capture the common structure within the solutions, evident below, and so can find an efficient O(nm) expression for this target:
k = 2: sum((((((sum(A, 1)) * B) * A) * B)'), 1)
k = 3: sum((((((((sum(A, 1)) * B) * A) * B) * A) * B)'), 1)
k = 4: sum((((((((((sum(A, 1)) * B) * A) * B) * A) * B) * A) * B)'), 1)
k = 5: sum((((((((((((sum(A, 1)) * B) * A) * B) * A) * B) * A) * B) * A) * B)'), 1)
k = 6: sum(((((((((((((sum(A, 1) * B) * A) * B) * A) * B) * A) * B) * A) * B) * A) * B)'), 1)

1.1 Related work

The problem addressed in this paper overlaps with the areas of theorem proving [5, 9, 11], program induction [18, 28] and probabilistic programming [12, 20]. These domains involve the challenging issues of undecidability, the halting problem, and a massive space of potential computations. However, we limit our domain to the computation of polynomials with fixed degree k, where undecidability and the halting problem are not present, and the space of computation is manageable (i.e. it grows exponentially, but not super-exponentially). Symbolic computation engines, such as Maple [6] and Mathematica [27], are capable of simplifying expressions by collecting terms but do not explicitly seek versions of lower complexity. Furthermore, these systems are rule based and do not use learning approaches, the major focus of this paper. In general, there has been very little exploration of statistical machine learning techniques in these fields, one of the few attempts being the recent work of Bridge et al. [4], who use learning to select between different heuristics for first-order reasoning. In contrast, our approach does not use hand-designed heuristics, instead learning them automatically from the results of simpler expressions.
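The identities in Examples 1 and 2 can be checked numerically. The sketch below does this in Python rather than the paper's Matlab-style notation; the helper functions are our own, written in pure Python for small matrices:

```python
import random

def matmul(X, Y):
    # Naive matrix product, O(n*m*p).
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][j] * Y[j][k] for j in range(m)) for k in range(p)] for i in range(n)]

def col_sums(X):
    # sum(X, 1): the 1 x m row vector of column sums, computed in O(n*m).
    return [[sum(row[j] for row in X) for j in range(len(X[0]))]]

def total(X):
    # sum(sum(X)): the sum of all entries of X.
    return sum(sum(row) for row in X)

random.seed(0)
n = 5
A = [[random.randint(-9, 9) for _ in range(n)] for _ in range(n)]
B = [[random.randint(-9, 9) for _ in range(n)] for _ in range(n)]

# Example 1: sum(sum(A*B)) == sum((sum(A, 1) * B)', 1)
assert total(matmul(A, B)) == total(matmul(col_sums(A), B))

# Example 2 pattern: sum(sum((A*B)^k)) == sum(((...((sum(A,1)*B)*A)*B...)*B)', 1)
for k in range(2, 7):
    C = matmul(A, B)
    P = C
    for _ in range(k - 1):          # naive (A*B)^k: O(k n^3)
        P = matmul(P, C)
    v = col_sums(A)
    for _ in range(k - 1):          # efficient left-to-right row-vector products: O(k n^2)
        v = matmul(matmul(v, B), A)
    v = matmul(v, B)
    assert total(P) == total(v)
```

Both identities hold exactly over the integers, which is why the comparison can be an exact equality rather than a floating-point tolerance.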
Table 1: The grammar G used in our experiments.

Rule                    | Input                           | Output        | Computation        | Complexity
Matrix-matrix multiply  | X ∈ R^{n×m}, Y ∈ R^{m×p}        | Z ∈ R^{n×p}   | Z = X * Y          | O(nmp)
Matrix-element multiply | X ∈ R^{n×m}, Y ∈ R^{n×m}        | Z ∈ R^{n×m}   | Z = X .* Y         | O(nm)
Matrix-vector multiply  | X ∈ R^{n×m}, Y ∈ R^{m×1}        | Z ∈ R^{n×1}   | Z = X * Y          | O(nm)
Matrix transpose        | X ∈ R^{n×m}                     | Z ∈ R^{m×n}   | Z = X^T            | O(nm)
Column sum              | X ∈ R^{n×m}                     | Z ∈ R^{1×m}   | Z = sum(X,1)       | O(nm)
Row sum                 | X ∈ R^{n×m}                     | Z ∈ R^{n×1}   | Z = sum(X,2)       | O(nm)
Column repeat           | X ∈ R^{n×1}                     | Z ∈ R^{n×m}   | Z = repmat(X,1,m)  | O(nm)
Row repeat              | X ∈ R^{1×m}                     | Z ∈ R^{n×m}   | Z = repmat(X,n,1)  | O(nm)
Element repeat          | X ∈ R^{1×1}                     | Z ∈ R^{n×m}   | Z = repmat(X,n,m)  | O(nm)

The attribute grammar, originally developed in 1968 by Knuth [14] in the context of compiler construction, has been successfully used as a tool for design and formal specification. In our work, we apply attribute grammars to a search and optimization problem. This has previously been explored in a range of domains: from well-known algorithmic problems like knapsack packing [19], through bioinformatics [26], to music [10]. However, we are not aware of any previous work related to discovering mathematical formulas using grammars, or to learning in such a framework. The closest work to ours can be found in [7], which involves searching over the space of algorithms and in which the grammar attributes also represent computational complexity. Classical techniques in natural language processing make extensive use of grammars, for example to parse sentences and translate between languages. In this paper, we borrow techniques from NLP and apply them to symbolic computation. In particular, we make use of an n-gram model over mathematical operations, inspired by n-gram language models. Recursive neural networks have also been recently used in NLP, for example by Luong et al. [15] and Socher et al. [22, 23], as well as for generic knowledge representation (Bottou [2]). In particular, Socher et al. [23] apply them to parse trees for sentiment analysis. By contrast, we apply them to trees of symbolic expressions.
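The rules of Table 1 can be read as a small shape-and-cost calculator, which is how the complexity attribute propagates up a grammar tree. A sketch (the rule names and the function below are our own encoding, not the paper's code):

```python
def apply_rule(rule, shapes):
    """Return (output_shape, cost) for a Table 1 rule given input shapes (rows, cols),
    or raise on a shape mismatch. Costs follow the Complexity column of Table 1."""
    if rule == "matmul":            # Z = X * Y, O(nmp)
        (n, m), (m2, p) = shapes
        if m != m2:
            raise ValueError("inner dimensions differ")
        return (n, p), n * m * p
    if rule == "elemmul":           # Z = X .* Y, O(nm)
        (n, m), (n2, m2) = shapes
        if (n, m) != (n2, m2):
            raise ValueError("shapes differ")
        return (n, m), n * m
    if rule == "transpose":         # Z = X', O(nm)
        (n, m), = shapes
        return (m, n), n * m
    if rule == "colsum":            # Z = sum(X, 1): a 1 x m row, O(nm)
        (n, m), = shapes
        return (1, m), n * m
    if rule == "rowsum":            # Z = sum(X, 2): an n x 1 column, O(nm)
        (n, m), = shapes
        return (n, 1), n * m
    raise ValueError("unknown rule: " + rule)

# The efficient tree of Example 1, sum((sum(A, 1) * B)', 1), costs nm + mp:
(n, m), p = (4, 5), 6
s1, c1 = apply_rule("colsum", [(n, m)])        # sum(A, 1) -> (1, m)
s2, c2 = apply_rule("matmul", [s1, (m, p)])    # ... * B   -> (1, p)
assert s2 == (1, p) and c1 + c2 == n * m + m * p
```

The same bookkeeping enforces the paper's two tree constraints: a complete tree must produce a 1×1 shape, and the number of leaf uses of A must equal the degree k.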
Our work also has similarities to Bowman [3], who shows that a recursive network can learn simple logical predicates. Our demonstration of continuous embeddings for symbolic expressions has parallels with the embeddings used in NLP for words and sentence structure, for example by Collobert & Weston [8], Mnih & Hinton [17], Turian et al. [25] and Mikolov et al. [16].

2 Problem Statement

Problem Definition: We are given a symbolic target expression T that combines a set of variables V to produce an output O, i.e. O = T(V). We seek an alternate expression S such that S(V) = T(V) but with lower computational complexity, i.e. O(S) < O(T). In this paper we consider the restricted setting where: (i) T is a homogeneous polynomial of degree k (i.e. it only contains terms of degree k; e.g. ab + a^2 + ac is a homogeneous polynomial of degree 2, but a^2 + b is not homogeneous, since b is of degree 1 while a^2 is of degree 2); (ii) V contains a single matrix or vector A; and (iii) O is a scalar. While these assumptions may seem quite restrictive, they still permit a rich family of expressions for our algorithm to explore. For example, by combining multiple polynomial terms, an efficient Taylor series approximation can be found for expressions involving trigonometric or exponential operators. Regarding (ii), our framework can easily handle multiple variables, e.g. Figure 1, which shows expressions using two matrices, A and B. However, the rest of the paper considers targets based on a single variable. In Section 8, we discuss these restrictions further. Notation: We adopt Matlab-style syntax for expressions.

3 Attribute Grammar

We first define an attribute grammar G, which contains a set of mathematical operations, each with an associated complexity (the attribute). Since T contains exclusively polynomials, we use the grammar rules listed in Table 1. Using these rules we can develop trees that combine rules to form expressions involving V, which for the purposes of this paper is a single matrix A. Since we know T involves expressions of degree k, each tree must use A exactly k times. Furthermore, since the output is a scalar, each tree must also compute a scalar quantity. These two constraints limit the depth of each tree. For some targets T whose complexity is only O(n^3), we remove the matrix-matrix multiply rule, thus ensuring that if any solution is found its complexity is at most O(n^2) (see Section 7.2 for more details). Examples of trees are shown in Fig. 1. The search strategy for determining which rules to combine is addressed in Section 6.

4 Representation of Symbolic Expressions

We need an efficient way to check if the expression produced by a given tree, or combination of trees (see Section 5), matches T. The conventional approach would be to perform this check symbolically, but this is too slow for our purposes and is not amenable to integration with learning methods. We therefore explore two alternate approaches.

4.1 Numerical Representation

In this representation, each expression is represented by its evaluation on a randomly drawn set of N points, where N is large (typically 1000). More precisely, for each variable in V, N different copies are made, each populated with randomly drawn elements. The target expression evaluates each of these copies, producing a scalar value for each, so yielding a vector t of length N which uniquely characterizes T. Formally, t_n = T(V_n). We call this numerical vector t the descriptor of the symbolic expression T. The size of the descriptor, N, must be sufficiently large to ensure that different expressions are not mapped to the same descriptor. Furthermore, when the descriptors are used in the linear system of Eqn. 5 below, N must also be greater than the number of linear equations.
Any expression S formed by the grammar can be used to evaluate each V_n to produce another N-length descriptor vector s, which can then be compared to t. If the two match, then S(V) = T(V). In practice, using floating-point values can result in numerical issues that prevent t and s from matching, even if the two expressions are equivalent. We therefore use an integer-based descriptor in Z_p (the integers modulo p), where p is a large prime number. This prevents both rounding issues and numerical overflow.

4.2 Learned Representation

We now consider how to learn a continuous representation for symbolic expressions, that is, to learn a projection φ which maps expressions S to l-dimensional vectors: φ(S) ∈ R^l. We use a recursive neural network (RNN) to do this, in a similar fashion to Socher et al. [23] for natural language and Bowman et al. [3] for logical expressions. This potentially allows many symbolic tasks to be performed by machine learning techniques, in the same way that word-vectors (e.g. [8] and [16]) enable many NLP tasks to be posed as learning problems. We first create a dataset of symbolic expressions, spanning the space of all valid expressions up to degree k. We then group them into clusters of equivalent expressions (using the numerical representation to check for equality), and give each cluster a discrete label 1 . . . C. For example, A and (A^T)^T might have label 1, while Σ_i Σ_j A_{i,j} and Σ_j Σ_i A_{i,j} might have label 2, and so on. For k = 6, the dataset consists of C = 1687 classes, examples of which are shown in Fig. 1. Each class is split 80/20 into train/test sets. We then train a recursive neural network (RNN) to classify a grammar tree into one of the C clusters. Instead of representing each grammar rule by its underlying arithmetic, we parameterize it by a weight matrix or tensor (for operations with one or two inputs, respectively) and use this to learn the concept of each operation as part of the network.
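The integer-descriptor check of Section 4.1 is compact enough to sketch directly. The prime, the vector length, and the toy degree-2 expressions below are our own choices for illustration; the paper only specifies a large prime p and N ≈ 1000:

```python
import random

P = 2_147_483_647          # a large prime (2^31 - 1); an assumption, the paper just says "a large prime"
N = 1000                   # descriptor length

random.seed(1)
# N random copies of a 4-vector A, with entries drawn from Z_P
points = [[random.randrange(P) for _ in range(4)] for _ in range(N)]

def descriptor(expr):
    # Evaluate the expression on every random copy, reducing modulo P.
    return tuple(expr(a) % P for a in points)

# Two symbolically equivalent expressions get identical descriptors ...
t = descriptor(lambda a: sum(a) ** 2)                           # (sum_i a_i)^2
s = descriptor(lambda a: sum(x * y for x in a for y in a))      # sum_i sum_j a_i a_j
assert s == t
# ... while a genuinely different expression does not (with overwhelming probability).
u = descriptor(lambda a: sum(x * x for x in a))                 # sum_i a_i^2
assert u != t
```

Because the arithmetic is exact over Z_P, two equivalent polynomials always collide, while inequivalent ones agree on all N coordinates only with negligible probability.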
A vector a ∈ R^l, where l = 30 (selected by cross-validation to control the capacity of the RNN, since it directly controls the number of parameters in the model), is used to represent each input variable. Working along the grammar tree, each operation in S evolves this vector via matrix/tensor multiplications (preserving its length) until the entire expression is parsed, resulting in a single vector φ(S) of length l, which is passed to the classifier to determine the class of the expression, and hence which other expressions it is equivalent to. Fig. 2 shows this procedure for two different expressions. Consider the first expression S = (A .* A)' * sum(A, 2). The first operation here is .*, which is implemented in the RNN by taking the two (identical) vectors a and applying a weight tensor W3 (of size l × l × l, so that the output is also of size l), followed by a rectified-linear non-linearity. The output of this stage is thus max((W3 ∗ a) ∗ a, 0). This vector is presented to the next operation, a matrix transpose, whose output is thus max(W2 ∗ max((W3 ∗ a) ∗ a, 0), 0). Applying the remaining operations produces the final output: φ(S) = max((W4 ∗ max(W2 ∗ max((W3 ∗ a) ∗ a, 0), 0)) ∗ max(W1 ∗ a, 0), 0). This is presented to a C-way softmax classifier to predict the class of the expression. The weights W are trained using a cross-entropy loss and backpropagation.
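A minimal sketch of this forward pass may help make it concrete. Everything below is our own pure-Python stand-in with random (untrained) weights and a tiny embedding length; the paper uses l = 30 and learned weights:

```python
import random

L = 4                                    # toy embedding length; the paper uses l = 30
random.seed(0)

def vec():   return [random.uniform(-0.5, 0.5) for _ in range(L)]
def mat():   return [vec() for _ in range(L)]
def relu(v): return [max(x, 0.0) for x in v]

def matvec(W, v):
    # Unary rule (e.g. transpose): an l x l matrix applied to the embedding, length preserved.
    return [sum(W[i][j] * v[j] for j in range(L)) for i in range(L)]

def tensorvec(T, u, v):
    # Binary rule (e.g. .*): bilinear form out[i] = sum_{j,k} T[i][j][k] u[j] v[k].
    return [sum(T[i][j][k] * u[j] * v[k] for j in range(L) for k in range(L)) for i in range(L)]

a  = vec()                               # embedding of the input variable A
W2 = mat()                               # weight matrix for the transpose rule
W3 = [mat() for _ in range(L)]           # weight tensor for the .* rule

# phi((A .* A)') = max(W2 * max((W3 * a) * a, 0), 0)
h   = relu(tensorvec(W3, a, a))
phi = relu(matvec(W2, h))
assert len(phi) == L and all(x >= 0.0 for x in phi)
```

In the real model, phi would then be combined with the sum(A, 2) branch through another tensor before the softmax classifier.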
Figure 1: Samples from two classes of degree k = 6 in our dataset of expressions, used to learn a continuous representation of symbolic expressions via an RNN. Each line represents a different expression, but those in the same class are equivalent to one another.

(a) Class A:
(((sum((sum((A * (A’)), 1)), 2)) * ((A * (((sum((A’), 1)) * A)’))’)) * A)
(sum(((sum((A * (A’)), 2)) * ((sum((A’), 1)) * (A * ((A’) * A)))), 1))
(((sum(A, 1)) * (((sum(A, 2)) * (sum(A, 1)))’)) * (A * ((A’) * A)))
((((sum((sum((A * (A’)), 1)), 2)) * ((sum((A’), 1)) * (A * ((A’) * A))))’)’)
((sum(A, 1)) * (((A’) * (A * ((A’) * ((sum(A, 2)) * (sum(A, 1))))))’))
((sum((sum((A * (A’)), 1)), 2)) * ((sum((A’), 1)) * (A * ((A’) * A))))
(((sum((sum((A * (A’)), 1)), 2)) * ((sum((A’), 1)) * A)) * ((A’) * A))

(b) Class B:
((A’) * ((sum(A, 2)) * ((sum((A’), 1)) * (A * (((sum((A’), 1)) * A)’)))))
(sum(((A’) * ((sum(A, 2)) * ((sum((A’), 1)) * (A * ((A’) * A))))), 2))
((((sum(A, 2)) * ((sum((A’), 1)) * A))’) * (A * (((sum((A’), 1)) * A)’)))
(((sum((A’), 1)) * (A * ((A’) * ((sum(A, 2)) * ((sum((A’), 1)) * A)))))’)
((((sum((A’), 1)) * A)’) * ((sum((A’), 1)) * (A * (((sum((A’), 1)) * A)’))))
(((A * ((A’) * ((sum(A, 2)) * ((sum((A’), 1)) * A))))’) * (sum(A, 2)))
(((A’) * ((sum(A, 2)) * ((sum((A’), 1)) * A))) * (sum(((A’) * A), 2)))

Figure 2 [diagram]: Our RNN applied to two expressions, (a) (A .* A)' * sum(A, 2) and (b) (A' .* A') * sum(A, 2). The matrix A is represented by a fixed random vector a (of length l = 30). Each operation in the expression applies a different matrix (for single-input operations) or tensor (for dual inputs, e.g. matrix-element multiplication) to this vector. After each operation, a rectified-linear non-linearity is applied. The weight matrices/tensors for each operation are shared across different expressions. The final vector is passed to a softmax classifier (not shown) to predict which class the expression belongs to. In this example, both expressions are equivalent and thus should be mapped to the same class.
When training the RNN, there are several important details that are crucial to obtaining high classification accuracy:
• The weights should be initialized to the identity, plus a small amount of Gaussian noise added to all elements. The identity allows information to flow the full length of the network, up to the classifier, regardless of its depth [21]. Without this, the RNN overfits badly, producing test accuracies of ∼1%.
• Rectified linear units work much better in this setting than tanh activation functions.
• We learn using a curriculum [1], starting with the simplest expressions of low degree and slowly increasing k.
• The weight matrix in the softmax classifier has a much larger (×100) learning rate than the rest of the layers. This encourages the representation to stay still even when targets are replaced, for example as we move to harder examples.
• As well as updating the weights of the RNN, we also update the initial value of a (i.e. we backpropagate to the input as well).

When the RNN-based representation is employed for identity discovery (see Section 6.3), the vector φ(S) is used directly (i.e. the C-way softmax used in training is removed from the network).

5 Linear Combinations of Trees

For simple targets, an expression that matches the target may be contained within a single grammar tree. But more complex expressions typically require a linear combination of expressions from different trees. To handle this, we can use the integer-based descriptors for each tree in a linear system and solve for a match to the target descriptor (if one exists). Given a set of M trees, each with its own integer descriptor vector f, we form an M-by-N linear system of equations and solve it:

  F w ≡ t (mod p),

where F = [f1, . . . , fM] holds the tree representations, w is the weighting on each of the trees, and t is the target representation. The system is solved using Gaussian elimination, where addition and multiplication are performed modulo p.
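A sketch of such a solver over Z_p (our own implementation, not the paper's code; `pow(x, -1, p)` for modular inverses requires Python 3.8 or later):

```python
def solve_mod_p(F, t, p):
    """Solve F w = t (mod p) by Gauss-Jordan elimination over Z_p.
    F is a list of equation rows, t the right-hand side; returns one solution or None."""
    rows = [[x % p for x in row] + [b % p] for row, b in zip(F, t)]
    ncols = len(F[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue                               # no pivot in this column: free variable
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][c], -1, p)               # modular inverse exists because p is prime
        rows[r] = [x * inv % p for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = rows[i][c]
                rows[i] = [(x - f * y) % p for x, y in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    if any(row[-1] for row in rows[r:]):           # a row reading 0 = b with b != 0: no solution
        return None
    w = [0] * ncols
    for i, c in enumerate(pivots):
        w[c] = rows[i][-1]                         # free variables stay 0
    return w

p = 101
F = [[1, 2, 0], [3, 4, 1], [0, 1, 1]]
t = [5, 12, 4]
w = solve_mod_p(F, t, p)
assert all(sum(f * x for f, x in zip(row, w)) % p == b for row, b in zip(F, t))
```

The `None` return corresponds to case (a) below (no linear combination of the current trees matches the target); any returned w corresponds to case (b).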
The number of solutions can vary: (a) there can be no solution, which means that no linear combination of the current set of trees can match the target expression (if all possible trees have been enumerated, this implies the target expression is outside the scope of the grammar); or (b) there can be one or more solutions, meaning that some combination of the current set of trees yields a match to the target expression.

6 Search Strategy

So far, we have proposed a grammar which defines the computations that are permitted (like a programming language grammar), but it gives no guidance as to how to explore the space of possible expressions. Neither do the representations we introduced help – they simply allow us to determine whether an expression matches or not. We now describe how to efficiently explore the space by learning which paths are likely to yield a match. Our framework uses two components: a scheduler and a strategy. The scheduler is fixed, and traverses the space of expressions according to recommendations given by the selected strategy (e.g. "Random", "n-gram", or "RNN"). The strategy assesses which of the possible grammar rules is likely to lead to a solution, given the current expression. Starting with the variables V (in our case a single element A, or more generally the elements A, B, etc.), at each step the scheduler receives scores for each rule from the strategy and picks the one with the highest score. This continues until the expression reaches degree k and the tree is complete. We then run the linear solver to see if a linear combination of the existing set of trees matches the target. If not, the scheduler starts again with a new tree, initialized with the set of variables V. The n-gram and RNN strategies are learned in an incremental fashion, starting with simple target expressions (i.e. those of low degree k, such as Σ_{i,j} AA^T).
Once solutions to these are found, they become training examples used to improve the strategy, which is needed for tackling harder targets (e.g. Σ_{i,j} AA^T A).

6.1 Random Strategy

The random strategy involves no learning and assigns equal scores to all valid grammar rules, so the scheduler randomly picks which expression to try at each step. For simple targets, this strategy may succeed, as the scheduler may stumble upon a match to the target within a reasonable time-frame. But for complex target expressions of high degree k, the search space is huge and the approach fails.

6.2 n-gram

In this strategy, we simply count how often subtrees of depth n occur in solutions to previously solved targets. As the number of different subtrees of depth n is large, the counts become very sparse as n grows. We therefore use a weighted linear combination of the scores from all depths up to n. We found an effective weighting to be 10^k, where k is the depth of the tree.

6.3 Recursive Neural Network

Section 4.2 showed how to use an RNN to learn a continuous representation of grammar trees. Recall that the RNN φ maps expressions to continuous vectors: φ(S) ∈ R^l. To build a search strategy from this, we train a softmax layer on top of the RNN to predict which rule should be applied to the current expression (or expressions, since some rules have two inputs) so that we match the target. Formally, we have two current branches b1 and b2 (each corresponding to an expression) and wish to predict the root operation r that joins them (e.g. .*) from among the valid grammar rules (|r| in total). We first use the previously trained RNN to compute φ(b1) and φ(b2). These are then presented to an |r|-way softmax layer (whose weight matrix U is of size 2l × |r|). If only one branch exists, then b2 is set to a fixed random vector. The training data for U comes from trees that give efficient solutions to targets of lower degree k (i.e. simpler targets).
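The depth-weighted n-gram counting of Section 6.2 can be sketched as follows. The tree encoding (nested tuples of rule names) and all function names are ours, chosen only for illustration:

```python
from collections import Counter

def truncate(tree, depth):
    # Depth-limited view of a subtree; anything below the cutoff becomes the wildcard "*".
    if depth == 0:
        return "*"
    if isinstance(tree, str):
        return tree
    return (tree[0],) + tuple(truncate(kid, depth - 1) for kid in tree[1:])

def subtrees(tree):
    yield tree
    if not isinstance(tree, str):
        for kid in tree[1:]:
            yield from subtrees(kid)

def ngram_counts(solutions, max_depth):
    # Count depth-d patterns over every subtree of every previously solved tree.
    counts = {d: Counter() for d in range(1, max_depth + 1)}
    for sol in solutions:
        for sub in subtrees(sol):
            if isinstance(sub, str):
                continue
            for d in counts:
                counts[d][truncate(sub, d)] += 1
    return counts

def score(candidate, counts):
    # Weighted combination over depths, using the 10^k weighting reported in the text.
    return sum(10 ** d * counts[d][truncate(candidate, d)] for d in counts)

solved = [("colsum", ("matmul", "A", "B")),
          ("colsum", ("matmul", ("matmul", "A", "B"), "A"))]
counts = ngram_counts(solved, max_depth=2)
good = ("colsum", ("matmul", "A", "B"))     # root pattern seen in both solved trees
bad  = ("rowsum", ("matmul", "A", "B"))     # rowsum root never observed
assert score(good, counts) > score(bad, counts)
```

The scheduler would rank candidate rule applications by this score at each step, which is how repetitive patterns like the k = 2 . . . 6 solutions of Example 2 get reinforced.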
Training of the softmax layer is performed by stochastic gradient descent. We use dropout [13], as the network has a tendency to overfit and repeat exactly the same expressions for the next value of k. Thus, instead of training on exactly φ(b1) and φ(b2), we drop activations as we propagate toward the top of the tree (the same fraction at each depth), which encourages the RNN to capture more local structures. At test time, the probabilities from the softmax become the scores used by the scheduler.

7 Experiments

We first show results relating to the learned representation for symbolic expressions (Section 4.2). Then we demonstrate our framework discovering efficient identities. For brevity, the identities discovered are listed in the supplementary material [29].

7.1 Expression Classification using Learned Representation

Table 2 shows the accuracy of the RNN model on expressions of varying degree, ranging from k = 3 to k = 6. The difficulty of the task can be appreciated by looking at the examples in Fig. 1. The low error rate of ≤5%, despite the use of a simple softmax classifier, demonstrates the effectiveness of our learned representation.

                        Degree k = 3 | Degree k = 4 | Degree k = 5 | Degree k = 6
Test accuracy           100% ± 0%    | 96.9% ± 1.5% | 94.7% ± 1.0% | 95.3% ± 0.7%
Number of classes       12           | 125          | 970          | 1687
Number of expressions   126          | 1520         | 13038        | 24210

Table 2: Accuracy of predictions using our learned symbolic representation (averaged over 10 different initializations). As the degree increases, the task becomes more challenging, because the number of classes grows and the computation trees become deeper. However, our dataset grows larger too (training uses 80% of the examples).

7.2 Efficient Identity Discovery

In our experiments we consider 5 different families of expressions, chosen to fall within the scope of our grammar rules:

1. (Σ AA^T)_k: A is an R^{n×n} matrix. The k-th term is Σ_{i,j} (AA^T)^{⌊k/2⌋} for even k and Σ_{i,j} (AA^T)^{⌊k/2⌋} A for odd k. E.g.
for k = 2: Σ_{i,j} AA^T; for k = 3: Σ_{i,j} AA^T A; for k = 4: Σ_{i,j} AA^T AA^T; etc. Naive evaluation is O(kn^3).

2. (Σ (A .* A) A^T)_k: A is an R^{n×n} matrix; let B = A .* A. The k-th term is Σ_{i,j} (BA^T)^{⌊k/2⌋} for even k and Σ_{i,j} (BA^T)^{⌊k/2⌋} B for odd k. E.g. for k = 2: Σ_{i,j} (A .* A) A^T; for k = 3: Σ_{i,j} (A .* A) A^T (A .* A); for k = 4: Σ_{i,j} (A .* A) A^T (A .* A) A^T; etc. Naive evaluation is O(kn^3).

3. Sym_k: Elementary symmetric polynomials. A is a vector in R^{n×1}. For k = 1: Σ_i A_i; for k = 2: Σ_{i<j} A_i A_j; for k = 3: Σ_{i<j<k} A_i A_j A_k; etc. Naive evaluation is O(n^k).

4. (RBM-1)_k: A is a vector in R^{n×1} and v is a binary n-vector. The k-th term is Σ_{v ∈ {0,1}^n} (v^T A)^k. Naive evaluation is O(2^n).

5. (RBM-2)_k: Taylor series terms for the partition function of an RBM. A is a matrix in R^{n×n}; v and h are binary n-vectors. The k-th term is Σ_{v,h ∈ {0,1}^n} (v^T A h)^k. Naive evaluation is O(2^{2n}).

Note that (i) for all families, the expressions yield a scalar output; (ii) the families are listed in rough order of "difficulty"; and (iii) we are not aware of any previous exploration of these expressions, except for Sym_k, which is well studied [24]. For the (Σ AA^T)_k and (Σ (A .* A) A^T)_k families we remove the matrix-matrix multiply rule from the grammar, thus ensuring that if any solution is found it will be efficient, since the remaining rules are at most O(kn^2) rather than O(kn^3). The other families use the full grammar, given in Table 1. However, the limited set of rules means that if any solution is found, it can be at most O(n^3), rather than exponential in n, as the naive evaluations would be. For each family, we apply our framework, using the three different search strategies introduced in Section 6. For each run we impose a fixed cut-off time of 10 minutes§, beyond which we terminate the search. At each value of k, we repeat the experiments 10 times with different random initializations and count the number of runs that find an efficient solution.
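For the Sym_k family above, an O(nk) evaluation is classical and gives a sense of the kind of speed-up being sought (the identities our framework actually finds are instead expressed in the matrix grammar of Table 1). A sketch of one well-known dynamic-programming scheme, checked against the naive definition:

```python
from itertools import combinations
from math import prod

def sym_naive(a, k):
    # Direct definition: sum over all k-subsets of the entries of a.
    return sum(prod(c) for c in combinations(a, k))

def sym_fast(a, k):
    # O(n*k) dynamic programme: after processing a prefix of a, e[j] holds the
    # elementary symmetric polynomial of degree j over that prefix.
    e = [1] + [0] * k
    for x in a:
        for j in range(k, 0, -1):   # update in place, highest degree first
            e[j] += x * e[j - 1]
    return e[k]

a = [2, 3, 5, 7, 11]
assert all(sym_fast(a, k) == sym_naive(a, k) for k in range(1, len(a) + 1))
```

The naive form enumerates C(n, k) subsets, while the recurrence touches each entry of a exactly once per degree.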
Any non-zero count is deemed a success, since each identity only needs to be discovered once. However, in Fig. 3 we show the fraction of successful runs, which gives a sense of how quickly the identity was found.
§Running on a 3 GHz 16-core Intel Xeon. Changing the cut-off has little effect on the plots, since the search space grows exponentially fast.
We start with k = 2 and increase up to k = 15, using the solutions from previous values of k as training data for the current degree. The search space grows quickly with k, as shown in Table 3. Fig. 3 shows results for four of the families. We use n-grams for n = 1, ..., 5, as well as the RNN with two different dropout rates (0.125 and 0.3). The learning approaches generally do much better than the random strategy for large values of k, with the 3-gram, 4-gram and 5-gram models outperforming the RNN. For the first two families, the 3-gram model reliably finds solutions. These solutions involve repetition of local patterns (e.g. Example 2), which can easily be captured with n-gram models. However, patterns that do not have a simple repetitive structure are much more difficult to generalize. The (RBM-2)_k family is the most challenging, involving a double exponential sum, and its solutions have highly complex trees (see supplementary material [29]). In this case, none of our approaches performed better than the random strategy and no solutions were discovered for k > 5. However, the k = 5 solution was found by the RNN consistently faster than by the random strategy (100 ± 12 vs. 438 ± 77 secs).

[Figure 3: Evaluation on four different families of expressions, (ΣAA^T)_k, (Σ(A.∗A)A^T)_k, Sym_k, and (RBM-1)_k, plotting p(Success) against degree k = 2, ..., 15 for the RNN (dropout 0.3 and 0.125), 1- to 5-gram models, and the random strategy. As the degree k increases, the random strategy consistently fails but the learning approaches can still find solutions (i.e. p(Success) is non-zero). Best viewed in electronic form.]

Table 3: The number of possible expressions for different degrees k.
                     k = 2   k = 3   k = 4   k = 5   k = 6   k = 7 and higher
# Terms ≤ O(n^2)      39     171     687     2628    9785    Out of memory
# Terms ≤ O(n^3)      41     187     790     3197    10k+

8 Discussion
We have introduced a framework based on a grammar of symbolic operations for discovering mathematical identities. Through the novel application of learning methods, we have shown how the exploration of the search space can be learned from previously successful solutions to simpler expressions. This allows us to discover complex expressions that random or brute-force strategies cannot find (the identities are given in the supplementary material [29]). Some of the families considered in this paper are close to expressions often encountered in machine learning. For example, dropout involves an exponential sum over binary masks, which is related to the RBM-1 family. Also, the partition function of an RBM can be approximated by the RBM-2 family. Hence the identities we have discovered could potentially be used to give a closed-form version of dropout, or to compute the RBM partition function efficiently (i.e. in polynomial time). Additionally, the automatic nature of our system naturally lends itself to integration with compilers, or other optimization tools, where it could replace computations with efficient versions thereof.
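Sym_k is the one family with a classical efficient evaluation: the coefficients of Π_i(1 + A_i x) give all elementary symmetric polynomials in O(nk) time. The sketch below (a standard dynamic program, not necessarily the identity found by the search) contrasts it with the naive O(n^k) sum:

```python
import numpy as np
from itertools import combinations

def sym_naive(a, k):
    """Sym_k by direct summation over all k-subsets: O(n^k) terms."""
    return sum(np.prod(c) for c in combinations(a, k))

def sym_fast(a, k):
    """Elementary symmetric polynomial e_k via the coefficients of
    prod_i (1 + a_i x): an O(nk) dynamic program."""
    e = np.zeros(k + 1)
    e[0] = 1.0
    for ai in a:
        for j in range(k, 0, -1):  # update highest degree first
            e[j] += ai * e[j - 1]
    return e[k]

a = [1.0, 2.0, 3.0, 4.0]
assert sym_fast(a, 2) == sym_naive(a, 2) == 35.0
```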
Our framework could potentially be applied to more general settings, to discover novel formulae in broader areas of mathematics. To realize this, additional grammar rules, e.g. involving recursion or trigonometric functions, would be needed. However, this would require a more complex scheduler to determine when to terminate a given grammar tree. Also, it is surprising that a recursive neural network can generate an effective continuous representation for symbolic expressions. This could have broad applicability in allowing machine learning tools to be applied to symbolic computation. The problem addressed in this paper involves discrete search within a combinatorially large space, a core problem in AI. Our successful use of machine learning to guide the search gives hope that similar techniques might be effective in other AI tasks where combinatorial explosions are encountered.

Acknowledgements
The authors would like to thank Facebook and Microsoft Research for their support.

References
[1] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In ICML, 2009.
[2] L. Bottou. From machine learning to machine reasoning. Machine Learning, 94(2):133–149, 2014.
[3] S. R. Bowman. Can recursive neural tensor networks learn logical reasoning? arXiv preprint arXiv:1312.6192, 2013.
[4] J. P. Bridge, S. B. Holden, and L. C. Paulson. Machine learning for first-order theorem proving. Journal of Automated Reasoning, 53:141–172, August 2014.
[5] C.-L. Chang. Symbolic logic and mechanical theorem proving. Academic Press, 1973.
[6] B. W. Char, K. O. Geddes, G. H. Gonnet, B. L. Leong, M. B. Monagan, and S. M. Watt. Maple V library reference manual, volume 199. Springer-Verlag New York, 1991.
[7] G. Cheung and S. McCanne. An attribute grammar based framework for machine-dependent computational optimization of media processing algorithms. In ICIP, volume 2, pages 797–801. IEEE, 1999.
[8] R. Collobert and J. Weston.
A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML, 2008.
[9] S. A. Cook. The complexity of theorem-proving procedures. In Proceedings of the Third Annual ACM Symposium on Theory of Computing, pages 151–158. ACM, 1971.
[10] M. Desainte-Catherine and K. Barbar. Using attribute grammars to find solutions for musical equational programs. ACM SIGPLAN Notices, 29(9):56–63, 1994.
[11] M. Fitting. First-order logic and automated theorem proving. Springer, 1996.
[12] N. Goodman, V. Mansinghka, D. Roy, K. Bonawitz, and D. Tarlow. Church: a language for generative models. arXiv:1206.3255, 2012.
[13] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580, 2012.
[14] D. E. Knuth. Semantics of context-free languages. Mathematical Systems Theory, 2(2):127–145, 1968.
[15] M.-T. Luong, R. Socher, and C. D. Manning. Better word representations with recursive neural networks for morphology. In CoNLL, 2013.
[16] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv:1301.3781, 2013.
[17] A. Mnih and G. E. Hinton. A scalable hierarchical distributed language model. In NIPS, 2009.
[18] P. Nordin. Evolutionary program induction of binary machine code and its applications. Krehl Munster, 1997.
[19] M. O'Neill, R. Cleary, and N. Nikolov. Solving knapsack problems with attribute grammars. In Proceedings of the Third Grammatical Evolution Workshop (GEWS04). Citeseer, 2004.
[20] A. Pfeffer. Practical probabilistic programming. In Inductive Logic Programming, pages 2–3. Springer, 2011.
[21] A. M. Saxe, J. L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv:1312.6120, 2013.
[22] R. Socher, C. D. Manning, and A. Y. Ng.
Learning continuous phrase representations and syntactic parsing with recursive neural networks. In Proceedings of the NIPS-2010 Deep Learning and Unsupervised Feature Learning Workshop, pages 1–9, 2010.
[23] R. Socher, A. Perelygin, J. Y. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, 2013.
[24] R. P. Stanley. Enumerative Combinatorics. Number 49. Cambridge University Press, 2011.
[25] J. Turian, L. Ratinov, and Y. Bengio. Word representations: a simple and general method for semi-supervised learning. In ACL, 2010.
[26] J. Waldispühl, B. Behzadi, and J.-M. Steyaert. An approximate matching algorithm for finding (sub-)optimal sequences in S-attributed grammars. Bioinformatics, 18(suppl 2):S250–S259, 2002.
[27] S. Wolfram. The Mathematica Book, volume 221. Wolfram Media, Champaign, Illinois, 1996.
[28] M. L. Wong and K. S. Leung. Evolutionary program induction directed by logic grammars. Evolutionary Computation, 5(2):143–180, 1997.
[29] W. Zaremba, K. Kurach, and R. Fergus. Learning to discover efficient mathematical identities. arXiv preprint arXiv:1406.1584 (http://arxiv.org/abs/1406.1584), 2014.
Convex Deep Learning via Normalized Kernels
Özlem Aslan, Dept of Computing Science, University of Alberta, Canada, ozlem@cs.ualberta.ca
Xinhua Zhang, Machine Learning Group, NICTA and ANU, xizhang@nicta.com.au
Dale Schuurmans, Dept of Computing Science, University of Alberta, Canada, dale@cs.ualberta.ca

Abstract
Deep learning has been a long-standing pursuit in machine learning, which until recently was hampered by unreliable training methods before the discovery of improved heuristics for embedded layer training. A complementary research strategy is to develop alternative modeling architectures that admit efficient training methods while expanding the range of representable structures toward deep models. In this paper, we develop a new architecture for nested nonlinearities that allows arbitrarily deep compositions to be trained to global optimality. The approach admits both parametric and nonparametric forms through the use of normalized kernels to represent each latent layer. The outcome is a fully convex formulation that is able to capture compositions of trainable nonlinear layers to arbitrary depth.

1 Introduction
Deep learning has recently achieved significant advances in several areas of perceptual computing, including speech recognition [1], image analysis and object detection [2, 3], and natural language processing [4]. The automated acquisition of representations is motivated by the observation that appropriate features make any learning problem easy, whereas poor features hamper learning. Given the practical significance of feature engineering, automated methods for feature discovery offer an important tool for applied machine learning. Ideally, automatically acquired features capture simple but salient aspects of the input distribution, upon which subsequent feature discovery can compose increasingly abstract and invariant aspects [5]; an intuition that appears to be well supported by recent empirical evidence [6].
Unfortunately, deep architectures are notoriously difficult to train and, until recently, required significant experience to manage appropriately [7, 8]. Beyond well-known problems like local minima [9], deep training landscapes also exhibit plateaus [10] that arise from credit assignment problems in backpropagation. An intuitive understanding of the optimization landscape and careful initialization both appear to be essential to successful training [11]. Nevertheless, the development of recent training heuristics has improved the quality of feature discovery at lower levels in deep architectures. These advances began with the idea of bottom-up, stage-wise unsupervised training of latent layers [12, 13] ("pre-training"), and progressed to more recent ideas like dropout [14]. Despite the resulting empirical success, however, such advances occur in the context of a problem that is known to be NP-hard in the worst case (even to approximate) [15]; hence there is no guarantee that worst-case, rather than "typical", behavior will not arise in any particular problem. Given the recent success of deep learning, it is no surprise that there has been growing interest in gaining a deeper theoretical understanding. One key motivation of recent theoretical work has been to ground deep learning on a well-understood computational foundation. For example, [16] demonstrates that polynomial-time (high-probability) identification of an optimal deep architecture can be achieved by restricting weights to bounded random variates and considering hard-threshold generative gates. Other recent work [17] considers a sum-product formulation [18], where guarantees can be made about the efficient recovery of an approximately optimal polynomial basis. Although these treatments do not cover the specific models that have been responsible for state-of-the-art results, they do provide insight into the computational structure of deep learning.
The focus of this paper is on kernel-based approaches to deep learning, which offer a potentially easier path to achieving a simple computational understanding. Kernels [19] have had a significant impact in machine learning, partly because they offer flexible modeling capability without sacrificing convexity in common training scenarios [20]. Given the convexity of the resulting training formulations, suboptimal local minima and plateaus are eliminated, while reliable computational procedures are widely available. A common misconception about kernel methods is that they are inherently "shallow" [5], but depth is an aspect of how such methods are used, not an intrinsic property. For example, [21] demonstrates how nested compositions of kernels can be incorporated in a convex training formulation, which can be interpreted as learning over a (fixed) composition of hidden layers with infinitely many features. Other work has formulated adaptive learning of nested kernels, albeit by sacrificing convexity [22]. More recently, [23, 24] have considered learning kernel representations of latent clusters, achieving convex formulations under some relaxations. Finally, [25] demonstrated that an adaptive hidden layer could be expressed as the problem of learning a latent kernel between given input and output kernels within a jointly convex formulation. Although these works show clearly how latent kernel learning can be formulated, convex models have remained restricted to a single adaptive layer, with no clear path suggested for a multi-layer extension. In this paper, we develop a convex formulation of multi-layer learning that allows multiple latent kernels to be connected through nonlinear conditional losses. In particular, each pair of successive layers is connected by a prediction loss that is jointly convex in the adjacent kernels, while expressing a non-trivial, non-linear mapping between layers that supports multi-factor latent representations.
The resulting formulation significantly extends previous convex models, which have only been able to train a single adaptive kernel while maintaining a convex training objective. Additional algorithmic development yields an approach with improved scaling properties over previous approaches, although not yet at the level of current deep learning methods. We believe the result is the first fully convex training formulation of a deep learning architecture with adaptive hidden layers, which demonstrates some useful potential in empirical investigations.

2 Background
[Figure 1: Multi-layer conditional models]
To begin, consider a multi-layer conditional model where the input x_i is an n-dimensional feature vector and the output y_i ∈ {0,1}^m is a multi-label target vector over m labels. For concreteness, consider a three-layer model (Figure 1). Here, the output of the first hidden layer is determined by multiplying the input x_i with a weight matrix W ∈ R^{h×n} and passing the result through a nonlinear transfer σ_1, yielding φ_i = σ_1(W x_i). The output of the second layer is then determined by multiplying the first layer output φ_i with a second weight matrix U ∈ R^{h′×h} and passing the result through a nonlinear transfer σ_2, yielding θ_i = σ_2(U φ_i), etc. The final output is determined via ŷ_i = σ_3(V θ_i), for V ∈ R^{m×h′}. For simplicity, we will set h′ = h. The goal of training is to find the weight matrices W, U, and V that minimize a training objective defined over the training data (with regularization). In particular, we assume the availability of t training examples {(x_i, y_i)}_{i=1}^t, and denote the feature matrix X := (x_1, ..., x_t) ∈ R^{n×t} and the label matrix Y := (y_1, ..., y_t) ∈ R^{m×t} respectively. One of the key challenges for training arises from the fact that the latent variables Φ := (φ_1, ..., φ_t) and Θ := (θ_1, ..., θ_t) are unobserved.
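The forward computation just described can be sketched as follows (σ here is an arbitrary stand-in for the transfers σ_1, σ_2, σ_3, which the paper deliberately leaves abstract; tanh is our choice, not the paper's):

```python
import numpy as np

def forward(x, W, U, V, sigma=np.tanh):
    """Forward pass of the three-layer conditional model of Figure 1."""
    phi = sigma(W @ x)       # first hidden layer:  phi   = sigma1(W x)
    theta = sigma(U @ phi)   # second hidden layer: theta = sigma2(U phi)
    return sigma(V @ theta)  # output:              y_hat = sigma3(V theta)

rng = np.random.default_rng(0)
n, h, m = 5, 4, 3           # input dim, hidden dim (h' = h), output labels
x = rng.standard_normal(n)
W = rng.standard_normal((h, n))
U = rng.standard_normal((h, h))
V = rng.standard_normal((m, h))
y_hat = forward(x, W, U, V)
assert y_hat.shape == (m,)
```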
To introduce our main development, we begin with a reconstruction of [25], which proposed a convex formulation of a simpler two-layer model. Although the techniques proposed in that work are intrinsically restricted to two layers, we will eventually show how this barrier can be surpassed through the introduction of a new tool: normalized output kernels. However, we first need to provide a more general treatment of the three main obstacles to obtaining a convex training formulation for multi-layer architectures like Figure 1.

2.1 First Obstacle: Nonlinear Transfers
The first key obstacle arises from the presence of the transfer functions σ_i, which provide the essential nonlinearity of the model. In classical examples, such as auto-encoders and feed-forward neural networks, an explicit form for σ_i is prescribed, e.g. a step or sigmoid function. Unfortunately, the imposition of a nonlinear transfer in any deterministic model imposes highly non-convex constraints of the form φ_i = σ_1(W x_i). This problem is alleviated in nondeterministic models like probabilistic networks (PFNs) [26] and restricted Boltzmann machines (RBMs) [12], where the nonlinear relationship between the output (e.g. φ_i) and the linear pre-image (e.g. W x_i) is only softly enforced via a nonlinear loss L that measures their discrepancy (see Figure 1). Such an approach was adopted by [25], where the values of the hidden layer responses (e.g. φ_i) were treated as independent variables whose values are to be optimized in conjunction with the weights. In the present case, if one similarly optimizes rather than marginalizes over the hidden layer values Φ and Θ (i.e. Viterbi-style training), a generalized training objective for a multi-layer architecture (Figure 1) can be expressed as

min_{W,U,V,Φ,Θ} L_1(WX, Φ) + (1/2)∥W∥² + L_2(UΦ, Θ) + (1/2)∥U∥² + L_3(VΘ, Y) + (1/2)∥V∥².¹   (1)

The nonlinear loss L_1 bridges the nonlinearity introduced by σ_1, and L_2 bridges the nonlinearity introduced by σ_2, etc.
Importantly, these losses, albeit nonlinear, can be chosen to be convex in their first argument; for example, as in standard models like PFNs and (implicitly) RBMs. In addition to these exponential family models, which have traditionally been the focus of deep learning research, continuous latent variable models have also been considered, e.g. the rectified linear model [27] and the exponential family harmonium. In this paper, like [25], we will use large-margin losses, which offer additional sparsity and simplifications. Unfortunately, even though the overall objective (1) is convex in the weight matrices (W, U, V) given (Φ, Θ), it is not jointly convex in all participating variables, due to the interaction between the latent variables (Φ, Θ) and the weight matrices (W, U, V).

2.2 Second Obstacle: Bilinear Interaction
The second key obstacle therefore arises from the bilinear interaction between the latent variables and weight matrices in (1). To overcome this obstacle, consider a single connecting layer, which consists of an input matrix (e.g. Φ), an output matrix (e.g. Θ), and an associated weight matrix (e.g. U):

min_U L(UΦ, Θ) + (1/2)∥U∥².   (2)

By the representer theorem, it follows that the optimal U can be expressed as U = AΦ′ for some A ∈ R^{m×t}. Denote the linear response Z = UΦ = AΦ′Φ = AK, where K = Φ′Φ is the input kernel matrix. Then tr(UU′) = tr(AKA′) = tr(AKK†KA′) = tr(ZK†Z′), where K† is the Moore-Penrose pseudo-inverse (recall KK†K = K and K†KK† = K†); therefore

(2) = min_Z L(Z, Θ) + (1/2) tr(ZK†Z′).   (3)

This is essentially the value regularization framework [28]. Importantly, the objective in (3) is jointly convex in Z and K, since tr(ZK†Z′) is a perspective function [29]. Therefore, although the single-layer model is not jointly convex in the input features Φ and model parameters U, it is convex in the equivalent reparameterization (K, Z) given Θ. This is the technique used by [25] for the output layer.
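The reparameterization behind (3) can be checked numerically: with U = AΦ′ and Z = UΦ = AK, the identity KK†K = K gives tr(UU′) = tr(ZK†Z′) even when K is rank-deficient (a quick sanity check of ours, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
h, t, m = 4, 8, 3
Phi = rng.standard_normal((h, t))
A = rng.standard_normal((m, t))

U = A @ Phi.T                  # representer form: U = A Phi'
K = Phi.T @ Phi                # input kernel, rank h < t (rank-deficient)
Z = U @ Phi                    # linear responses: Z = A K

# tr(UU') = tr(A K A') = tr(A K K† K A') = tr(Z K† Z')
assert np.allclose(np.trace(U @ U.T),
                   np.trace(Z @ np.linalg.pinv(K) @ Z.T))
```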
Finally, note that Z satisfies the constraint Z ∈ R^{m×n}Φ := {UΦ : U ∈ R^{m×n}}, which we will write as Z ∈ RΦ for convenience. Clearly it is equivalent to Z ∈ RK.

2.3 Third Obstacle: Joint Input-Output Optimization
The third key obstacle is that each of the latent variables, Φ and Θ, simultaneously serves as the input and the output target for successive layers. Therefore, it is necessary to reformulate the connecting problem (2) so that it is jointly convex in all three components U, Φ and Θ; unfortunately, (3) is not convex in Θ. Although this appears to be an insurmountable obstacle in general, [25] propose an exact reformulation in the case when Θ is boolean valued (consistent with the probabilistic assumptions underlying a PFN or RBM) by assuming the loss function satisfies an additional postulate.

Postulate 1. L(Z, Θ) can be rewritten as L^u(Θ′Z, Θ′Θ) for L^u jointly convex in both arguments.

Intuitively, this assumption allows the loss to be parameterized in terms of the propensity matrix Θ′Z and the unnormalized output kernel Θ′Θ (hence the superscript of L^u). That is, the (i, j)-th component of Θ′Z stands for the linear response value of example j with respect to the label of example i. The j-th column therefore encodes the propensity of example j to all other examples. This reparameterization is critical because it bypasses the linear response value and relies solely on the relationship between pairs of examples. The work [25] proposes a particular multi-label prediction loss that satisfies Postulate 1 for boolean target vectors θ_i; we propose an alternative below.

¹The terms ∥W∥², ∥U∥² and ∥V∥² are regularizers, where the norm is the Frobenius norm. For clarity we have omitted the regularization parameters, relative weightings between different layers, and offset weights from the model. These components are obviously important in practice; however, they play no key role in the technical development, and removing them greatly simplifies the expressions.
Using Postulate 1 and again letting Z = UΦ, one can rewrite the objective in (2) as L^u(Θ′UΦ, Θ′Θ) + (1/2)∥U∥². Now, if we denote N := Θ′Θ and S := Θ′Z = Θ′UΦ (hence S ∈ Θ′RΦ = NRK), the formulation can be reduced to the following (see Appendix A):

(2) = min_S L^u(S, N) + (1/2) tr(K†S′N†S).   (4)

Therefore, Postulate 1 allows (2) to be re-expressed in a form where the objective is jointly convex in the propensity matrix S and the output kernel N. Given that N is a discrete but positive semidefinite matrix, a final relaxation is required to achieve a convex training problem.

Postulate 2. The domain of N = Θ′Θ can be relaxed to a convex set preserving sufficient structure.

Below we will introduce an improved scheme for such a relaxation. Although these developments support a convex formulation of two-layer model training [25], they appear insufficient for deeper models. For example, by applying (3) and (4) to the three-layer model of Figure 1, one obtains

L^u_1(S_1, N_1) + (1/2) tr(K†S′_1 N†_1 S_1) + L^u_2(S_2, N_2) + (1/2) tr(N†_1 S′_2 N†_2 S_2) + L_3(Z_3, Y) + (1/2) tr(Z_3 N†_2 Z′_3),

where N_1 = Φ′Φ and N_2 = Θ′Θ are two latent kernels imposed between the input and output. Unfortunately, this objective is not jointly convex in all variables, since tr(N†_1 S′_2 N†_2 S_2) is not jointly convex in (N_1, S_2, N_2); hence the approach of [25] cannot extend beyond a single hidden layer.

3 Multi-layer Convex Modeling via Normalized Kernels
Although obtaining a convex formulation for general multi-layer models appears to be a significant challenge, progress can be made by considering an alternative approach. The failure of the previous development in [25] can be traced back to (2), which eventually causes the coupled, non-convex regularization to occur between connected latent kernels. A natural response therefore is to reconsider the original regularization scheme, keeping in mind that the representer theorem must still be supported.
One such regularization scheme has been investigated in the clustering literature [30, 31], which suggests a reformulation of the connecting model (2) using value regularization [28]:

min_U L(UΦ, Θ) + (1/2)∥Θ′U∥².   (5)

Here ∥Θ′U∥² replaces ∥U∥² from (2). The significance of this reformulation is that it still admits the representer theorem, which implies that the optimal U must be of the form U = (ΘΘ′)†AΦ′ for some A ∈ R^{m×n}. Now, since Θ generally has full row rank (i.e. there are more examples than labels), one may execute a change of variables A = ΘB. Such a substitution leads to the regularizer
∥Θ′(ΘΘ′)†ΘBΦ′∥², which can be expressed in terms of the normalized output kernel [30]:

M := Θ′(ΘΘ′)†Θ.   (6)

The term (ΘΘ′)† essentially normalizes the spectrum of the kernel Θ′Θ, and it is obvious that all eigenvalues of M are either 0 or 1, i.e. M² = M [30]. The regularizer can finally be written as

∥MBΦ′∥² = tr(MBKB′M) = tr(MBKK†KB′M) = tr(SK†S′), where S := MBK.   (7)

It is easy to show that S = Θ′Z = Θ′UΦ, which is exactly the propensity matrix. As before, to achieve a convex training formulation, additional structure must be postulated on the loss function, but now allowing convenient expression in terms of normalized latent kernels.

Postulate 3. The loss L(Z, Θ) can be written as L^n(Θ′Z, Θ′(ΘΘ′)†Θ), where L^n is jointly convex in both arguments.

Here we write L^n to emphasize the use of normalized kernels. Under Postulate 3, an alternative convex objective can be achieved for a local connecting model:

L^n(S, M) + (1/2) tr(SK†S′), where S ∈ MRK.   (8)

Crucially, this objective is now jointly convex in S, M and K; in comparison to (4), the normalization has removed the output kernel from the regularizer. The feasible region {(S, M, K) : M ⪰ 0, K ⪰ 0, S ∈ MRK} is also convex (see Appendix B). Applying (8) to the first two layers and (3) to the output layer, a fully convex objective for a multi-layer model (e.g., as in Figure 1) is obtained:

L^n_1(S_1, M_1) + (1/2) tr(S_1 K† S′_1) + L^n_2(S_2, M_2) + (1/2) tr(S_2 M†_1 S′_2) + L_3(Z_3, Y) + (1/2) tr(Z_3 M†_2 Z′_3),   (9)

where S_1 ∈ M_1RK, S_2 ∈ M_2RM_1, and Z_3 ∈ RM_2.² All that remains is to design a convex relaxation of the domain of M (for Postulate 2) and to design the loss L^n (for Postulate 3).

²Clearly the first layer can still use (4) with an unnormalized output kernel N_1, since its input X is observed.

3.1 Convex Relaxation of the Domain of Output Kernels M
Clearly, from its definition (6), M has a non-convex domain in general. Ideally one should design convex relaxations for each domain of Θ.
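The projection property just noted (M² = M, eigenvalues in {0, 1}) is easy to confirm numerically; the sanity check below (ours, not the paper's) also covers the one-hot multiclass case discussed next:

```python
import numpy as np

# generic Theta: M = Theta'(Theta Theta')† Theta is an orthogonal projection
rng = np.random.default_rng(2)
h, t = 3, 10
Theta = rng.standard_normal((h, t))                  # full row rank w.h.p.
M = Theta.T @ np.linalg.pinv(Theta @ Theta.T) @ Theta
assert np.allclose(M @ M, M)                         # M^2 = M
assert np.isclose(np.trace(M), h)                    # tr(M) = rank(Theta)

# one-hot multiclass Theta: additionally M 1 = 1 and M_ij >= 0
labels = np.array([0, 0, 1, 2, 2, 2, 1, 0])
T = np.zeros((3, len(labels)))
T[labels, np.arange(len(labels))] = 1.0
Mc = T.T @ np.linalg.pinv(T @ T.T) @ T
assert np.allclose(Mc @ np.ones(len(labels)), 1.0)   # M 1 = 1
assert np.all(Mc >= -1e-12)                          # M_ij >= 0
```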
However, M exhibits some nice properties for any Θ:

M ⪰ 0,  M ⪯ I,  tr(M) = tr((ΘΘ′)†(ΘΘ′)) = rank(ΘΘ′) = rank(Θ).   (10)

Here I is the identity matrix, and we also use M ⪰ 0 to encode M′ = M. Therefore, tr(M) provides a convenient proxy for controlling the rank of the latent representation, i.e. the number of hidden nodes in a layer. Given a specified number of hidden nodes h, we may enforce tr(M) = h. The main relaxation introduced here is replacing the eigenvalue constraint λ_i(M) ∈ {0, 1} (implied by M² = M) with 0 ≤ λ_i(M) ≤ 1. Such a relaxation retains sufficient structure to allow, e.g., a 2-approximation of optimal clustering to be preserved even by imposing only spectral constraints [30]. Experimental results below further demonstrate that nesting preserves sufficient structure, even with relaxation, to capture relationships that cannot be recovered by shallower architectures. More refined constraints can be included to better account for the domain of Θ. For example, if Θ expresses target values for multiclass classification (i.e. Θ_ij ∈ {0, 1}, Θ′1 = 1, where 1 is the vector of all ones), we further have M_ij ≥ 0 and M1 = 1. If Θ corresponds to multilabel classification where each example belongs to exactly k (out of the h) labels (i.e. Θ ∈ {0,1}^{h×t}, Θ′1 = k1), then M can have negative elements, but the spectral constraint M1 = 1 still holds (see proof in Appendix C). So we will choose the domains for M_1 and M_2 in (9) to consist of the spectral constraints:

M := {0 ⪯ M ⪯ I : M1 = 1, tr(M) = h}.   (11)

3.2 A Jointly Convex Multi-label Loss for Normalized Kernels
An important challenge is to design an appropriate nonlinear loss to connect each layer of the model. Rather than conditional log-likelihood in a generative model, [25] introduced the idea of using a large-margin, multi-label loss between a linear response z and a boolean target vector y ∈ {0,1}^h:

L̃(z, y) = max(1 − y + kz − 1(y′z)),   (12)

where 1 denotes the vector of all 1s.
Intuitively, this encourages the responses on the active labels, y′z, to exceed k times the response of any inactive label, kz_i, by a margin, where the implicit nonlinear transfer is a step function. Remarkably, this loss can be shown to satisfy Postulate 1 [25]. The loss can easily be adapted to the normalized case as follows. We first generalize the notion of margin to consider a "normalized label" (YY′)†y:

L(z, y) = max(1 − (YY′)†y + kz − 1(y′z)).

To obtain some intuition, consider the multiclass case where k = 1. In this case, YY′ is a diagonal matrix whose (i, i)-th element is the number of examples in class i. Dividing by this number allows the margin requirement to be weakened for popular labels, while more focus is shifted to less represented labels. For a given set of t paired input/output examples (Z, Y), the sum of the losses can then be compactly expressed as L(Z, Y) = Σ_j L(z_j, y_j) = τ(kZ − (YY′)†Y) + t − tr(Y′Z), where τ(Γ) := Σ_j max_i Γ_ij. This loss can be shown to satisfy Postulate 3:³

L^n(S, M) = τ(S − (1/k)M) + t − tr(S), where S = Y′Z and M = Y′(YY′)†Y.   (13)

This loss can be naturally interpreted using the remark following Postulate 1. It encourages the propensity of example j with respect to itself, S_jj, to be higher than its propensity with respect to other examples, S_ij, by a margin defined through the normalized kernel M. Note, however, that this loss does not correspond to a linear transfer between layers, even in terms of the propensity matrix S or the normalized output kernel M. As in all large-margin methods, the initial loss (12) is a convex upper bound on an underlying discrete loss defined with respect to a step transfer.

4 Efficient Optimization
Efficient optimization of the multi-layer model (9) is challenging, largely due to the matrix pseudo-inverse. Fortunately, the constraints on M are all spectral, which makes it easier to apply conditional gradient (CG) methods [32].
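The compact form of the loss can be verified against the per-example definition; the following is our own numeric sanity check of the stated identity L(Z, Y) = τ(kZ − (YY′)†Y) + t − tr(Y′Z):

```python
import numpy as np

def loss_per_example(z, y, YYt_pinv, k):
    """Large-margin multi-label loss with the normalized label (YY')† y."""
    return np.max(1.0 - YYt_pinv @ y + k * z - (y @ z))

def loss_compact(Z, Y, k):
    """Compact form: tau(kZ - (YY')†Y) + t - tr(Y'Z), tau = sum of column maxes."""
    t = Y.shape[1]
    G = k * Z - np.linalg.pinv(Y @ Y.T) @ Y
    return G.max(axis=0).sum() + t - np.trace(Y.T @ Z)

rng = np.random.default_rng(3)
m, t, k = 4, 6, 1                                    # multiclass: k = 1
labels = rng.integers(0, m, size=t)
Y = np.zeros((m, t))
Y[labels, np.arange(t)] = 1.0
Z = rng.standard_normal((m, t))
YYt_pinv = np.linalg.pinv(Y @ Y.T)
total = sum(loss_per_example(Z[:, j], Y[:, j], YYt_pinv, k) for j in range(t))
assert np.isclose(total, loss_compact(Z, Y, k))
```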
This is much more convenient than models based on unnormalized kernels [25], where the presence of both spectral and non-spectral constraints necessitated expensive algorithms such as the alternating direction method of multipliers [33].

³A simple derivation extends [25]: τ(kZ − (YY′)†Y) = max_{Λ∈R^{m×t}_+ : Λ′1=1} tr(Λ′(kZ − (YY′)†Y)) = max_{Ω∈R^{t×t}_+ : Ω′1=1} (1/k) tr(Ω′Y′(kZ − (YY′)†Y)) = τ(Y′Z − (1/k)M). Here the second equality follows because for any Λ ∈ R^{m×t}_+ satisfying Λ′1 = 1, there must be an Ω ∈ R^{t×t}_+ satisfying Ω′1 = 1 and Λ = YΩ/k.

Algorithm 1: Conditional gradient algorithm to optimize f(M_1, M_2) for M_1, M_2 ∈ M.
1 Initialize M̃_1 and M̃_2 with some random matrices.
2 while s = 1, 2, ... do
3   Compute the gradients G_1 = ∂f(M̃_1, M̃_2)/∂M_1 and G_2 = ∂f(M̃_1, M̃_2)/∂M_2.
4   Compute the new bases M^s_1 and M^s_2 by invoking oracle (15) with G_1 and G_2 respectively.
5   Totally corrective update: min_{α∈∆_s, β∈∆_s} f(Σ^s_{i=1} α_i M^i_1, Σ^s_{i=1} β_i M^i_2).
6   Set M̃_1 = Σ^s_{i=1} α_i M^i_1 and M̃_2 = Σ^s_{i=1} β_i M^i_2; break if stopping criterion is met.
7 return (M̃_1, M̃_2).

Denote the objective in (9) as g(M_1, M_2, S_1, S_2, Z_3). The idea behind our approach is to optimize

f(M_1, M_2) := min_{S_1∈M_1RK, S_2∈M_2RM_1, Z_3∈RM_2} g(M_1, M_2, S_1, S_2, Z_3)   (14)

by CG; see Algorithm 1 for details. We next demonstrate how each step can be executed efficiently.

Oracle problem in Step 4. This requires solving, given a (real symmetric) gradient G,

max_{M∈M} tr(−GM) ⇔ max_{0⪯M_1⪯I, tr(M_1)=h−1} tr(−G(HM_1H + (1/t)11′)), where H = I − (1/t)11′.   (15)

Here we used Lemma 1 of [31]. By [34, Theorem 3.4], max_{0⪯M_1⪯I, tr(M_1)=h−1} tr(−HGH M_1) = Σ^{h−1}_{i=1} λ_i, where λ_1 ≥ λ_2 ≥ ... are the leading eigenvalues of −HGH. The maximum is attained at M_1 = Σ^{h−1}_{i=1} v_i v′_i, where v_i is the eigenvector corresponding to λ_i. The optimal solution to argmax_{M∈M} tr(−GM) can then be recovered as Σ^{h−1}_{i=1} v_i v′_i + (1/t)11′, which has low rank for small h.

Totally corrective update in Step 5.
This is the most computationally intensive step of CG:

min_{α∈∆_s, β∈∆_s} f(Σ^s_{i=1} α_i M^i_1, Σ^s_{i=1} β_i M^i_2),   (16)

where ∆_s stands for the s-dimensional probability simplex (nonnegative entries summing to 1). If one can solve (16) efficiently (which also provides the optimal S_1, S_2, Z_3 in (14) for the optimal α and β), then the gradient of f can also be obtained easily by Danskin's theorem (for Step 3 of Algorithm 1). However, the totally corrective update is expensive because, given α and β, each evaluation of the objective f itself requires an optimization over S_1, S_2, and Z_3. Such a nested optimization can be prohibitive. A key idea is to show that this totally corrective update can be accomplished with considerably improved efficiency through the use of block coordinate descent [35]. Taking into account the structure of the solution to the oracle, we denote

M_1(α) := Σ_i α_i M^i_1 = V_1 D(α) V′_1, and M_2(β) := Σ_i β_i M^i_2 = V_2 D(β) V′_2,   (17)

where D(α) = diag([α_1 1′_h, α_2 1′_h, ...]′) and D(β) = diag([β_1 1′_h, β_2 1′_h, ...]′). Denote

P(α, β, S_1, S_2, Z_3) := g(M_1(α), M_2(β), S_1, S_2, Z_3).   (18)

Clearly S_1 ∈ M_1(α)RK iff S_1 = V_1 A_1 K for some A_1; S_2 ∈ M_2(β)RM_1(α) iff S_2 = V_2 A_2 M_1(α) for some A_2; and Z_3 ∈ RM_2(β) iff Z_3 = A_3 M_2(β) for some A_3. So (16) is equivalent to

min_{α∈∆_s, β∈∆_s, A_1, A_2, A_3} P(α, β, V_1 A_1 K, V_2 A_2 M_1(α), A_3 M_2(β))   (19)
= L^n_1(V_1 A_1 K, M_1(α)) + (1/2) tr(V_1 A_1 K A′_1 V′_1)   (20)
+ L^n_2(V_2 A_2 M_1(α), M_2(β)) + (1/2) tr(V_2 A_2 M_1(α) A′_2 V′_2)   (21)
+ L_3(A_3 M_2(β), Y) + (1/2) tr(A_3 M_2(β) A′_3).   (22)

Thus we have eliminated all matrix pseudo-inverses. However, this is still expensive, because the size of each A_i depends on t. To simplify further, assume X′, V_1 and V_2 all have full column rank.⁴ Denote B_1 = A_1X′ (note K = X′X), B_2 = A_2V_1, B_3 = A_3V_2. Noting (17), the objective becomes

⁴This assumption is valid provided the features in X are linearly independent, since the bases (eigenvectors) accumulated through all iterations so far are also independent. The only exception is the eigenvector (1/√t)1.
But since α and β lie on a simplex, this direction always contributes a constant (1/t)11′ to M1(α) and M2(β).

R(α, β, B1, B2, B3) := L1^n(V1 B1 X, V1 D(α) V1′) + (1/2) tr(V1 B1 B1′ V1′)   (23)
  + L2^n(V2 B2 D(α) V1′, V2 D(β) V2′) + (1/2) tr(V2 B2 D(α) B2′ V2′)   (24)
  + L3(B3 D(β) V2′, Y) + (1/2) tr(B3 D(β) B3′).   (25)

This problem is much easier to solve, since the size of B_i depends only on the number of input features, the number of nodes in the two latent layers, and the number of output labels. Due to the greedy nature of CG, the number of latent nodes is generally low. So we can optimize R by block coordinate descent (BCD), i.e. alternating between:
1. Fix (α, β), and solve for (B1, B2, B3) (unconstrained smooth optimization, e.g. by LBFGS).
2. Fix (B1, B2, B3), and solve for (α, β) (e.g. by LBFGS with projection onto the simplex).
BCD is guaranteed to converge to a critical point when L1^n, L2^n and L3 are smooth (see Footnote 5). In practice, these losses can be made smooth by, e.g., approximating the max in (13) by a soft-max. It is crucial to note that although each of the two steps is convex, R is not jointly convex in its variables. So in general, this alternating scheme can only produce a stationary point of R. Interestingly, we further show that any stationary point must provide a globally optimal solution to P in (18).

Theorem 1. Suppose (α, β, B1, B2, B3) is a stationary point of R with α_i > 0 and β_i > 0. Assume X′, V1 and V2 all have full column rank. Then it must be a globally optimal solution to R, and this (α, β) must be an optimal solution to the totally corrective update (16).

See the proof in Appendix D. It is noteworthy that the conditions α_i > 0 and β_i > 0 are trivial to meet, because CG is guaranteed to converge to the optimum if α_i ≥ 1/s and β_i ≥ 1/s at each step s.

5 Empirical Investigation

To investigate the potential of deep versus shallow convex training methods, and global versus local training methods, we implemented the approach outlined above for a three-layer model along with comparison methods.
Below we use CVX3 and CVX2 to refer respectively to the three- and two-layer versions of the proposed model. For comparison, SVM1 refers to a one-layer SVM; TS1a [37] and TS1b [38] refer to one-layer transductive SVMs; NET2 refers to a standard two-layer sigmoid neural network with hidden layer size chosen by cross-validation; and LOC3 refers to the proposed three-layer model trained with exact (unrelaxed) local optimization. In these evaluations, we followed a transductive setup similar to that of [25]: a given set of data (X, Y) is divided into separate training and test sets, (XL, YL) and XU, where labels are only included for the training set. The training loss is then computed only on the training data, but the learned kernel matrices span the union of the data. For testing, the kernel responses on test data are used to predict output labels.

5.1 Synthetic Experiments

Our first goal was to compare the effective modeling capacity of a three- versus two-layer architecture given the convex formulations developed above. In particular, since the training formulation involves a convex relaxation of the normalized kernel domain, M in (11), it is important to determine whether the representational advantages of a three- versus two-layer architecture are maintained. We conducted two sets of experiments designed to separate one-layer from two-layer or deeper models, and two-layer from three-layer or deeper models. Although separating two- from one-layer models is straightforward, separating three- from two-layer models is a subtler question. Here we considered two synthetic settings defined by basic functions over boolean features:

Parity: y = x_1 ⊕ x_2 ⊕ . . . ⊕ x_n,   (26)
Inner Product: y = (x_1 ∧ x_{m+1}) ⊕ (x_2 ∧ x_{m+2}) ⊕ . . . ⊕ (x_m ∧ x_n), where m = n/2.   (27)

It is well known that Parity is easily computable by a two-layer linear-gate architecture but cannot be approximated by any one-layer linear-gate architecture on the same feature space [39].
The IP problem is motivated by a fundamental result in the circuit complexity literature: any small-weights threshold circuit of depth 2 requires size exp(Ω(n)) to compute (27) [39, 40].

(Footnote 5) Technically, for BCD to converge to a critical point, each block optimization needs to have a unique optimal solution. To ensure uniqueness, we used a method equivalent to the proximal method in Proposition 7 of [36].

Figure 2: Experimental results (synthetic data: larger dots mean repetitions fall on the same point).
(a) Synthetic results: Parity data (scatter of test error, CVX3 versus CVX2).
(b) Real results: Test error % (± stdev), 100/100 labeled/unlabeled:

        CIFAR       MNIST       USPS        COIL        Letter
TS1a    30.7 ±4.2   16.3 ±1.5   12.7 ±1.2   16.0 ±2.0    5.7 ±2.0
TS1b    26.0 ±6.5   16.0 ±2.0   11.0 ±1.7   20.0 ±3.6    5.0 ±1.0
SVM1    33.3 ±1.9   18.3 ±0.5   12.7 ±0.2   16.3 ±0.7    7.0 ±0.3
NET2    30.7 ±1.7   15.3 ±1.7   12.7 ±0.4   15.3 ±1.4    5.3 ±0.5
CVX2    27.7 ±5.5   12.7 ±3.2    9.7 ±3.1   14.0 ±3.6    5.7 ±2.9
LOC3    36.0 ±1.7   22.0 ±1.7   12.3 ±1.1   17.7 ±2.2   11.3 ±0.2
CVX3    23.3 ±0.5   13.0 ±0.3    9.0 ±0.9    9.0 ±0.3    5.7 ±0.2

(c) Synthetic results: IP data (scatter of test error, CVX3 versus CVX2).
(d) Real results: Test error % (± stdev), 200/200 labeled/unlabeled:

        CIFAR       MNIST       USPS        COIL        Letter
TS1a    32.0 ±2.6   10.7 ±3.1   10.3 ±0.6   13.7 ±4.0    3.8 ±0.3
TS1b    26.0 ±3.3   10.0 ±3.5   11.0 ±1.3   18.9 ±2.6    4.0 ±0.5
SVM1    32.3 ±1.6   12.3 ±1.4   10.3 ±0.1   14.7 ±1.3    4.8 ±0.5
NET2    30.7 ±0.5   11.3 ±1.3   11.2 ±0.5   14.5 ±0.6    4.3 ±0.1
CVX2    23.3 ±3.5    8.2 ±0.6    7.0 ±1.3    8.7 ±3.3    4.5 ±0.9
LOC3    28.2 ±2.3   12.7 ±0.6    8.0 ±0.1   12.3 ±0.9    7.3 ±1.1
CVX3    19.2 ±0.9    6.8 ±0.4    6.2 ±0.7    7.7 ±1.1    3.0 ±0.2

To generate data from these models, we set the number of input features to n = 8 (instead of n = 2 as in [25]), then generated 200 examples for training and 100 examples for testing; for each example, the features x_i were drawn from {0, 1} with equal probability.
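The two boolean targets in (26) and (27) are easy to generate. A small sketch under the setup described in this section (uniform boolean features, labels computed on the clean bits, then Gaussian feature corruption with variance 0.3; the helper names are ours):

```python
import numpy as np

def parity_label(Xb):
    # eq. (26): y = x1 xor x2 xor ... xor xn
    return Xb.sum(axis=1) % 2

def inner_product_label(Xb):
    # eq. (27): y = (x1 and x_{m+1}) xor ... xor (x_m and x_n), m = n/2
    m = Xb.shape[1] // 2
    return (Xb[:, :m] & Xb[:, m:]).sum(axis=1) % 2

def make_dataset(label_fn, n_samples, n, noise_var=0.3, seed=0):
    # boolean features drawn uniformly; labels computed on the clean bits,
    # then each feature corrupted by zero-mean Gaussian noise
    rng = np.random.default_rng(seed)
    Xb = rng.integers(0, 2, size=(n_samples, n))
    return Xb + rng.normal(0.0, np.sqrt(noise_var), Xb.shape), label_fn(Xb)
```

With n = 8 this reproduces the scale of the experiments here (200 training and 100 test examples per repetition).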
Then each x_i was corrupted independently by Gaussian noise with zero mean and variance 0.3. The experiments were repeated 100 times, and the resulting test errors of the two models are plotted in Figure 2. Figure 2(c) clearly shows that CVX3 is able to capture the structure of the IP problem much more effectively than CVX2, as the theory suggests for such architectures. In almost every repetition, CVX3 yields a lower (often much lower) test error than CVX2. Even on the Parity problem (Figure 2(a)), CVX3 generally produces lower error, although its advantage is not as significant. This is also consistent with theoretical analysis [39, 40], which shows that IP is harder to model than parity.

5.2 Experiments on Real Data

We also conducted an empirical investigation on some real data sets. Here we tried to replicate the results of [25] on similar data sets: USPS and COIL from [41], Letter from [42], MNIST, and CIFAR-100 from [43]. Similar to [23], we performed an optimistic model selection for each method on an initial sample of t training and t test examples; then, with the parameters fixed, the experiments were repeated 5 times on independently drawn sets of t training and t test examples from the remaining data. The results in Table 2(b) and Table 2(d) show that CVX3 is able to systematically reduce the test error of CVX2. This suggests that the advantage of deeper modeling does indeed arise from enhanced representation ability, and not merely from an enhanced ability to escape local minima or traverse plateaus, since neither exists in these cases.

6 Conclusion

We have presented a new formulation of multi-layer training that can accommodate an arbitrary number of nonlinear layers while maintaining a jointly convex training objective. Accurate learning of additional layers, when required, appears to demonstrate a marked advantage over shallower architectures, even when models can be trained to optimality.
Aside from further improvements in algorithmic efficiency, an interesting direction for future investigation is to capture unsupervised "stage-wise" training principles via auxiliary autoencoder objectives within a convex framework, rather than treating input reconstruction as a mere heuristic training device.

References
[1] G. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Trans. ASLP, 20(1):30-42, 2012.
[2] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS. 2012.
[3] Q. Le, M. Ranzato, R. Monga, M. Devin, G. Corrado, K. Chen, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In Proceedings ICML. 2012.
[4] R. Socher, C. Lin, A. Ng, and C. Manning. Parsing natural scenes and natural language with recursive neural networks. In ICML. 2011.
[5] Y. Bengio. Learning deep architectures for AI. Found. Trends in Machine Learning, 2:1-127, 2009.
[6] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE PAMI, 35(8):1798-1828, 2013.
[7] G. Tesauro. Temporal difference learning and TD-Gammon. CACM, 38(3), 1995.
[8] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Comput., 1:541-551, 1989.
[9] M. Gori and A. Tesi. On the problem of local minima in backpropagation. IEEE PAMI, 14:76-86, 1992.
[10] D. Erhan, Y. Bengio, A. Courville, P. Manzagol, and P. Vincent. Why does unsupervised pre-training help deep learning? JMLR, 11:625-660, 2010.
[11] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In ICML. 2013.
[12] G. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neur. Comp., 18(7), 2006.
[13] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. Manzagol.
Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR, 11:3371-3408, 2010.
[14] G. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors, 2012. ArXiv:1207.0580.
[15] K. Hoeffgen, H. Simon, and K. Van Horn. Robust trainability of single neurons. JCSS, 52:114-125, 1995.
[16] S. Arora, A. Bhaskara, R. Ge, and T. Ma. Provable bounds for learning some deep representations. In ICML. 2014.
[17] R. Livni, S. Shalev-Shwartz, and O. Shamir. An algorithm for training polynomial networks, 2014. ArXiv:1304.7045v2.
[18] R. Gens and P. Domingos. Discriminative learning of sum-product networks. In NIPS 25. 2012.
[19] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. JMAA, 33:82-95, 1971.
[20] B. Schoelkopf and A. Smola. Learning with Kernels. MIT Press, 2002.
[21] Y. Cho and L. Saul. Large margin classification in infinite neural networks. Neural Comput., 22, 2010.
[22] J. Zhuang, I. Tsang, and S. Hoi. Two-layer multiple kernel learning. In AISTATS. 2011.
[23] A. Joulin and F. Bach. A convex relaxation for weakly supervised classifiers. In Proceedings ICML. 2012.
[24] A. Joulin, F. Bach, and J. Ponce. Multi-class cosegmentation. In Proceedings CVPR. 2012.
[25] O. Aslan, H. Cheng, D. Schuurmans, and X. Zhang. Convex two-layer modeling. In NIPS. 2013.
[26] R. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1):71-113, 1992.
[27] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML. 2010.
[28] R. Rifkin and R. Lippert. Value regularization and Fenchel duality. JMLR, 8:441-479, 2007.
[29] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Mach. Learn., 73, 2008.
[30] J. Peng and Y. Wei. Approximating k-means-type clustering via semidefinite programming. SIAM J. on Optimization, 18:186-205, 2007.
[31] H. Cheng, X.
Zhang, and D. Schuurmans. Convex relaxations of Bregman clustering. In UAI. 2013.
[32] M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In ICML. 2013.
[33] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends in Machine Learning, 3(1):1-123, 2010.
[34] M. Overton and R. Womersley. Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices. Mathematical Programming, 62:321-357, 1993.
[35] F. Dinuzzo, C. S. Ong, P. Gehler, and G. Pillonetto. Learning output kernels with block coordinate descent. In ICML. 2011.
[36] L. Grippo and M. Sciandrone. On the convergence of the block nonlinear Gauss-Seidel method under convex constraints. Operations Research Letters, 26:127-136, 2000.
[37] V. Sindhwani and S. Keerthi. Large scale semi-supervised linear SVMs. In SIGIR. 2006.
[38] T. Joachims. Transductive inference for text classification using support vector machines. In ICML. 1999.
[39] A. Hajnal. Threshold circuits of bounded depth. J. of Computer & System Sciences, 46(2):129-154, 1993.
[40] A. A. Razborov. On small depth threshold circuits. In Algorithm Theory (SWAT 92). 1992.
[41] http://olivier.chapelle.cc/ssl-book/benchmarks.html
[42] http://archive.ics.uci.edu/ml/datasets
[43] http://www.cs.toronto.edu/~kriz/cifar.html
Discovering Structure in High-Dimensional Data Through Correlation Explanation

Greg Ver Steeg, Information Sciences Institute, University of Southern California, Marina del Rey, CA 90292, gregv@isi.edu
Aram Galstyan, Information Sciences Institute, University of Southern California, Marina del Rey, CA 90292, galstyan@isi.edu

Abstract

We introduce a method to learn a hierarchy of successively more abstract representations of complex data based on optimizing an information-theoretic objective. Intuitively, the optimization searches for a set of latent factors that best explain the correlations in the data as measured by multivariate mutual information. The method is unsupervised, requires no model assumptions, and scales linearly with the number of variables, which makes it an attractive approach for very high-dimensional systems. We demonstrate that Correlation Explanation (CorEx) automatically discovers meaningful structure for data from diverse sources including personality tests, DNA, and human language.

1 Introduction

Without any prior knowledge, what can be automatically learned from high-dimensional data? If the variables are uncorrelated, then the system is not really high-dimensional but should be viewed as a collection of unrelated univariate systems. If correlations exist, however, then some common cause or causes must be responsible for generating them. Without assuming any particular model for these hidden common causes, is it still possible to reconstruct them? We propose an information-theoretic principle, which we refer to as "correlation explanation", that codifies this problem in a model-free, mathematically principled way. Essentially, we are searching for latent factors so that, conditioned on these factors, the correlations in the data are minimized (as measured by multivariate mutual information). In other words, we look for the simplest explanation that accounts for the most correlations in the data.
As a bonus, building on this information-based foundation leads naturally to an innovative paradigm for learning hierarchical representations that is more tractable than Bayesian structure learning and provides richer insights than neural network inspired approaches [1]. After introducing the principle of "Correlation Explanation" (CorEx) in Sec. 2, we show that it can be efficiently implemented in Sec. 3. To demonstrate the power of this approach, we begin Sec. 4 with a simple synthetic example and show that standard learning techniques all fail to detect high-dimensional structure while CorEx succeeds. In Sec. 4.2.1, we show that CorEx perfectly reverse engineers the "big five" personality types from survey data while other approaches fail to do so. In Sec. 4.2.2, CorEx automatically discovers in DNA nearly perfect predictors of independent signals relating to gender, geography, and ethnicity. In Sec. 4.2.3, we apply CorEx to text and recover both stylistic features and hierarchical topic representations. After briefly considering intriguing theoretical connections in Sec. 5, we conclude with future directions in Sec. 6.

2 Correlation Explanation

Using standard notation [2], capital X denotes a discrete random variable whose instances are written in lowercase. A probability distribution over a random variable X, p_X(X = x), is shortened to p(x) unless ambiguity arises. The cardinality of the set of values that a random variable can take will always be finite and denoted by |X|. If we have n random variables, then G is a subset of indices, G ⊆ N_n = {1, . . . , n}, and X_G is the corresponding subset of the random variables (X_{N_n} is shortened to X). Entropy is defined in the usual way as H(X) ≡ E_X[−log p(x)]. Higher-order entropies can be constructed in various ways from this standard definition. For instance, the mutual information between two random variables, X_1 and X_2, can be written I(X_1 : X_2) = H(X_1) + H(X_2) − H(X_1, X_2).
The following measure of mutual information among many variables was first introduced as "total correlation" [3] and is also called multi-information [4] or multivariate mutual information [5].

TC(X_G) = Σ_{i ∈ G} H(X_i) − H(X_G)   (1)

For G = {i_1, i_2}, this corresponds to the mutual information, I(X_{i_1} : X_{i_2}). TC(X_G) is non-negative and zero if and only if the probability distribution factorizes. In fact, total correlation can also be written as a KL divergence, TC(X_G) = D_KL(p(x_G) || Π_{i ∈ G} p(x_i)). The total correlation among a group of variables, X, after conditioning on some other variable, Y, is simply TC(X|Y) = Σ_i H(X_i|Y) − H(X|Y). We can measure the extent to which Y explains the correlations in X by looking at how much the total correlation is reduced.

TC(X; Y) ≡ TC(X) − TC(X|Y) = Σ_{i ∈ N_n} I(X_i : Y) − I(X : Y)   (2)

We use semicolons as a reminder that TC(X; Y) is not symmetric in its arguments, unlike mutual information. TC(X|Y) is zero (and TC(X; Y) is maximized) if and only if the distribution of the X_i's conditioned on Y factorizes. This would be the case if Y were the common cause of all the X_i's, in which case Y explains all the correlation in X. TC(X_G|Y) = 0 can also be seen as encoding local Markov properties among a group of variables and, therefore, specifying a DAG [6]. This quantity has appeared as a measure of the redundant information that the X_i's carry about Y [7]. More connections are discussed in Sec. 5. Optimizing over Eq. 2 can now be seen as a search for a latent factor, Y, that explains the correlations in X. We can make this concrete by letting Y be a discrete random variable that can take one of k possible values and searching over all probabilistic functions of X, p(y|x):

max_{p(y|x)} TC(X; Y) s.t. |Y| = k.   (3)

The solution to this optimization is given as a special case in Sec. A. Total correlation is a functional over the joint distribution, p(x, y) = p(y|x)p(x), so the optimization implicitly depends on the data through p(x).
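Total correlation in Eq. 1 can be estimated directly from iid samples by plugging in empirical entropies. A minimal numpy sketch (a naive plug-in estimator, our own illustration rather than the paper's method):

```python
import numpy as np
from collections import Counter

def entropy(samples):
    # empirical Shannon entropy (in nats) of the rows of `samples`
    counts = np.array(list(Counter(map(tuple, samples)).values()), float)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

def total_correlation(X):
    # TC(X) = sum_i H(X_i) - H(X)  (eq. 1), from samples in the rows of X
    return sum(entropy(X[:, [i]]) for i in range(X.shape[1])) - entropy(X)
```

For two perfectly copied fair bits this gives TC = log 2 nats (the mutual information), while for independent bits it gives 0, matching the factorization property stated above.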
Typically, we have only a small number of samples drawn from p(x) (compared to the size of the state space). To make matters worse, if x ∈ {0, 1}^n, then optimizing over all p(y|x) involves at least 2^n variables. Surprisingly, despite these difficulties, we show in the next section that this optimization can be carried out efficiently. The maximum achievable value of this objective occurs for some finite k when TC(X|Y) = 0. This implies that the data are perfectly described by a naive Bayes model with Y as the parent and the X_i as the children. Generally, we expect that correlations in data may result from several different factors. Therefore, we extend the optimization above to include m different factors, Y_1, . . . , Y_m (see Footnote 1):

max_{G_j, p(y_j | x_{G_j})} Σ_{j=1}^m TC(X_{G_j}; Y_j) s.t. |Y_j| = k, G_j ∩ G_{j′ ≠ j} = ∅   (4)

Here we simultaneously search over subsets of variables, G_j, and over variables, Y_j, that explain the correlations in each group. While it is not necessary to make the optimization tractable, we impose an additional condition on the G_j so that each variable X_i is in a single group, G_j, associated with a single "parent", Y_j. The reason for this restriction is that it has been shown that the value of the objective can then be interpreted as a lower bound on TC(X) [8]. Note that this objective is valid and meaningful regardless of details about the data-generating process. We only assume that we are given p(x) or iid samples from it. The output of this procedure gives us the Y_j's, which are probabilistic functions of X. If we iteratively apply this optimization to the resulting probability distribution over Y, by searching for some Z_1, . . . , Z_m̃ that explain the correlations in the Y's, we will end up with a hierarchy of variables that forms a tree.

(Footnote 1) Note that in principle we could have just replaced Y in Eq. 3 with (Y_1, . . . , Y_m), but the state space would have been exponential in m, leading to an intractable optimization.

We now show that the optimization in Eq.
4 can be carried out efficiently even for high-dimensional spaces and small numbers of samples.

3 CorEx: Efficient Implementation of Correlation Explanation

We begin by re-writing the optimization in Eq. 4 in terms of mutual informations using Eq. 2:

max_{G, p(y_j|x)} Σ_{j=1}^m Σ_{i ∈ G_j} I(Y_j : X_i) − Σ_{j=1}^m I(Y_j : X_{G_j})   (5)

Next, we replace G with a set indicator variable, α_{i,j} = I[X_i ∈ G_j] ∈ {0, 1}:

max_{α, p(y_j|x)} Σ_{j=1}^m Σ_{i=1}^n α_{i,j} I(Y_j : X_i) − Σ_{j=1}^m I(Y_j : X)   (6)

The non-overlapping group constraint is enforced by demanding that Σ_j̄ α_{i,j̄} = 1. Note also that we dropped the subscript G_j in the second term of Eq. 6, but this has no effect because solutions must satisfy I(Y_j : X) = I(Y_j : X_{G_j}), as we now show. For fixed α, it is straightforward to find the solution of the Lagrangian optimization problem as the solution to a set of self-consistent equations. Details of the derivation can be found in Sec. A.

p(y_j|x) = (1/Z_j(x)) p(y_j) Π_{i=1}^n (p(y_j|x_i) / p(y_j))^{α_{i,j}}   (7)

p(y_j|x_i) = Σ_x̄ p(y_j|x̄) p(x̄) δ_{x̄_i, x_i} / p(x_i) and p(y_j) = Σ_x̄ p(y_j|x̄) p(x̄)   (8)

Note that δ is the Kronecker delta and that Y_j depends only on the X_i for which α_{i,j} is non-zero. Remarkably, Y_j's dependence on X can be written in terms of a linear (in n, the number of variables) number of parameters, which are just the marginals p(y_j), p(y_j|x_i). We approximate p(x) with the empirical distribution, p̂(x̄) = Σ_{l=1}^N δ_{x̄, x^{(l)}} / N. This approximation allows us to estimate marginals with fixed accuracy using only a constant number of iid samples from the true distribution. In Sec. A we show that Eq. 7, which defines the soft labeling of any x, can be seen as a linear function followed by a non-linear threshold, reminiscent of neural networks. Also note that the normalization constant Z_j(x) for any x can be calculated easily by summing over just |Y_j| = k values. For fixed values of the parameters p(y_j|x_i), we have an integer linear program for α, made easy by the constraint Σ_j̄ α_{i,j̄} = 1.
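The factorized labeling rule of Eq. 7 can be evaluated from the marginals alone, most stably in log space. A hedged sketch (the function and its calling convention are ours, not the released implementation's):

```python
import numpy as np

def label_probability(x, log_p_y, log_p_y_given_xi, alpha):
    # eq. (7): p(y|x) proportional to p(y) * prod_i [p(y|x_i)/p(y)]^alpha_i
    # log_p_y: shape (k,); log_p_y_given_xi[i]: shape (|X_i|, k); alpha: (n,)
    log_post = np.array(log_p_y, dtype=float)
    for i, xi in enumerate(x):
        log_post = log_post + alpha[i] * (log_p_y_given_xi[i][xi] - log_p_y)
    log_post -= np.logaddexp.reduce(log_post)   # subtract log Z_j(x)
    return np.exp(log_post)
```

As the text notes, the normalization only requires a sum over the k values of Y_j, so each soft label costs O(nk).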
The solution is α*_{i,j} = I[j = argmax_j̄ I(X_i : Y_j̄)]. However, this leads to a rough optimization space. The solution in Eq. 7 is valid (and meaningful, see Sec. 5 and [8]) for arbitrary values of α, so we relax our optimization accordingly. At step t = 0 in the optimization, we pick α^{t=0}_{i,j} ∼ U(1/2, 1) uniformly at random (violating the constraints). At step t + 1, we make a small update on α in the direction of the solution:

α^{t+1}_{i,j} = (1 − λ) α^t_{i,j} + λ α**_{i,j}   (9)

The second term, α**_{i,j} = exp(γ(I(X_i : Y_j) − max_j̄ I(X_i : Y_j̄))), implements a soft-max which converges to the true solution for α* in the limit γ → ∞. This leads to a smooth optimization, and good choices for λ, γ can be set through intuitive arguments described in Sec. B. Now that we have rules to update both α and p(y_j|x_i) so as to increase the value of the objective, we simply iterate between them until we achieve convergence. While there is no guarantee of finding the global optimum, the objective is upper bounded by TC(X) (or, equivalently, TC(X|Y) is lower bounded by 0). Pseudo-code for this approach is given in Algorithm 1, with additional details provided in Sec. B and source code available online (Footnote 2: open source code is available at http://github.com/gregversteeg/CorEx).

Algorithm 1: Pseudo-code implementing Correlation Explanation (CorEx)
input: A matrix of size n_s × n representing n_s samples of n discrete random variables
set: Set m, the number of latent variables Y_j, and k, so that |Y_j| = k
output: Parameters α_{i,j}, p(y_j|x_i), p(y_j), p(y|x^{(l)}) for i ∈ N_n, j ∈ N_m, l ∈ N_{n_s}, y ∈ N_k, x_i ∈ X_i
Randomly initialize α_{i,j}, p(y|x^{(l)});
repeat
  Estimate the marginals p(y_j), p(y_j|x_i) using Eq. 8;
  Calculate I(X_i : Y_j) from the marginals;
  Update α using Eq. 9;
  Calculate p(y|x^{(l)}), l = 1, . . . , n_s, using Eq. 7;
until convergence;

The overall complexity is linear in the number of variables. To bound the complexity in terms of the number of samples, we can always use minibatches of fixed size to estimate the marginals in Eq.
8. A common problem in representation learning is how to pick m, the number of latent variables to describe the data. Consider the limit in which we set m = n. To use all of Y_1, . . . , Y_m in our representation, we would need exactly one variable, X_i, in each group, G_j. Then, for all j, TC(X_{G_j}) = 0 and, therefore, the whole objective will be 0. This suggests that the maximum value of the objective must be achieved for some value of m < n. In practice, this means that if we set m too high, only some subset of the latent variables will be used in the solution, as we will demonstrate in Fig. 2. In other words, if m is set high enough, the optimization will result in some number of clusters m′ < m that is optimal with respect to the objective. Representations with different numbers of layers, different m, and different k can be compared according to how tight a lower bound they provide on TC(X) [8].

4 Experiments

4.1 Synthetic data
Y1 X... X1 X... Yb Z Layer 2 1 0 Xc Xn Y... Synthetic model Figure 1: (Left) We compare methods to recover the clusters of variables generated according to the model. (Right) Synthetic data is generated according to a tree of latent variables. To test CorEx’s ability to recover latent structure from data we begin by generating synthetic data according to the latent tree model depicted in Fig. 1 in which all the variables are hidden except for the leaf nodes. The most difficult part of reconstructing this tree is clustering of the leaf nodes. If a clustering method can do that then the latent variables can be reconstructed for each cluster easily using EM. We consider many different clustering methods, typically with several variations 4 of each technique, details of which are described in Sec. C. We use the adjusted Rand index (ARI) to measure the accuracy with which inferred clusters recover the ground truth. 3 We generated samples from the model in Fig. 1 with b = 8 and varied c, the number of leaves per branch. The Xi’s depend on Yj’s through a binary erasure channel (BEC) with erasure probability δ. The capacity of the BEC is 1 −δ so we let δ = 1 −2/c to reflect the intuition that the signal from each parent node is weakly distributed across all its children (but cannot be inferred from a single child). We generated max(200, 2n) samples. In this example, all the Yj’s are weakly correlated with the root node, Z, through a binary symmetric channel with flip probability of 1/3. Fig. 1 shows that for a small to medium number of variables, all the techniques recover the structure fairly well, but as the dimensionality increases only CorEx continues to do so. ICA and hierarchical clustering compete for second place. CorEx also perfectly recovers the values of the latent factors in this example. For latent tree models, recovery of the latent factors gives a global optimum of the objective in Eq. 4. 
Even though CorEx is only guaranteed to find local optima, in this example it correctly converges to the global optimum over a range of problem sizes. Note that a growing literature on latent tree learning attempts to reconstruct latent trees with theoretical guarantees [9, 1]. In principle, we should compare to these techniques, but they scale as O(n2) −O(n5) (see [3], Table 1) while our method is O(n). In a recent survey on latent tree learning methods, only one out of 15 techniques was able to run on the largest dataset considered (see [3], Table 3), while most of the datasets in this paper are orders of magnitude larger than that one. 0 1 I(Yj : Xi) t = 0 t = 10 Uncorrelated variables i = 1, . . . , nv j = 1 ... m ↵i,j t = 50 Figure 2: (Color online) A visualization of structure learning in CorEx, see text for details. Fig. 2 visualizes the structure learning process.4 This example is similar to that above but includes some uncorrelated random variables to show how they are treated by CorEx. We set b = 5 clusters of variables but we used m = 10 hidden variables. At each iteration, t, we show which hidden variables, Yj, are connected to input variables, Xi, through the connectivity matrix, ↵(shown on top). The mutual information is shown on the bottom. At the beginning, we started with full connectivity, but with nothing learned we have I(Yj : Xi) = 0. Over time, the hidden units “compete” to find a group of Xi’s for which they can explain all the correlations. After only ten iterations the overall structure appears and by 50 iterations it is exactly described. At the end, the uncorrelated random variables (Xi’s) and the hidden variables (Yj’s) which have not explained any correlations can be easily distinguished and discarded (visually and mathematically, see Sec. B). 
4.2 Discovering Structure in Diverse Real-World Datasets

4.2.1 Personality Surveys and the "Big Five" Personality Traits

One psychological theory suggests that there are five traits that largely reflect the differences in personality types [1]: extraversion, neuroticism, agreeableness, conscientiousness, and openness to experience. Psychologists have designed various instruments intended to measure whether individuals exhibit these traits. We consider a survey in which subjects rate fifty statements, such as "I am the life of the party", on a five-point scale: (1) disagree, (2) slightly disagree, (3) neutral, (4) slightly agree, and (5) agree (Footnote 5: data and the full list of questions are available at http://personality-testing.info/_rawdata/). The data consist of answers to these questions from about ten thousand test-takers. The test was designed with the intention that each question should belong to a cluster according to which personality trait the question gauges. Is it true that there are five factors that strongly predict the answers to these questions? CorEx learned a two-level hierarchical representation when applied to this data (full model shown in Fig. C.2). On the first level, CorEx automatically determined that the questions should cluster into five groups. Surprisingly, the five clusters exactly correspond to the big five personality traits as labeled by the test designers. It is unusual to recover the ground truth with perfect accuracy on an unsupervised learning problem, so we tried a number of other standard clustering methods to see if they could reproduce this result. We display the results using confusion matrices in Fig. 3.

(Footnote 3) The Rand index counts the percentage of pairs whose relative classification matches in both clusterings. ARI adds a correction so that a random clustering will give a score of zero, while an ARI of 1 corresponds to a perfect match.

(Footnote 4) A video is available online at http://isi.edu/~gregv/corex_structure.mpg.
The details of the techniques used are described in Sec. C, but all of them had an advantage over CorEx since they required that we specify the correct number of clusters. None of the other techniques are able to recover the five personality types exactly. Interestingly, Independent Component Analysis (ICA) [1] is the only other method that comes close. The intuition behind ICA is that it finds a linear transformation of the input that minimizes the multi-information among the outputs (Y_j). In contrast, CorEx searches for Y_j's so that the multi-information among the X_i's is minimized after conditioning on Y. ICA assumes that the signals that give rise to the data are independent, while CorEx does not. In this case, personality traits like "extraversion" and "agreeableness" are correlated, violating the independence assumption.

Figure 3: (Left) Confusion matrix comparing predicted clusters to true clusters for the questions on the Big-5 personality test. (Right) Hierarchical model constructed from samples of DNA by CorEx; latent variables are labeled with the demographic variable they best match and the corresponding ARI, e.g., Oceania (ARI: 1.00), America (ARI: 0.99), Subsaharan Africa (ARI: 0.98), gender (ARI: 0.95), East (ARI: 0.87), and EurAsia (ARI: 0.87).

4.2.2 DNA from the Human Genome Diversity Project

Next, we consider DNA data taken from 952 individuals of diverse geographic and ethnic backgrounds [1]. The data consist of 4170 variables describing different SNPs (single nucleotide polymorphisms) (see Footnote 6). We use CorEx to learn a hierarchical representation, which is depicted in Fig. 3. To evaluate the quality of the representation, we use the adjusted Rand index (ARI) to compare the clusters induced by each latent variable in the hierarchical representation to different demographic variables in the data.
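The adjusted Rand index used for these comparisons can be computed from the contingency table of the two labelings. A short sketch (a standard plug-in formula that should agree with common implementations such as scikit-learn's adjusted_rand_score):

```python
import numpy as np
from math import comb

def adjusted_rand_index(a, b):
    # ARI: Rand index corrected for chance, so a random clustering scores
    # around 0 and a perfect match (up to relabeling) scores 1
    classes_a, classes_b = np.unique(a), np.unique(b)
    n = len(a)
    C = np.array([[np.sum((a == i) & (b == j)) for j in classes_b]
                  for i in classes_a])                  # contingency table
    sum_ij = sum(comb(int(x), 2) for x in C.ravel())
    sum_a = sum(comb(int(x), 2) for x in C.sum(axis=1))
    sum_b = sum(comb(int(x), 2) for x in C.sum(axis=0))
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

Note that ARI is invariant to permuting cluster labels, which is what makes it suitable for comparing unsupervised clusters to demographic ground truth here.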
Latent variables which substantially match demographic variables are labeled in Fig. 3. The representation learned (unsupervised) on the first layer contains a perfect match for Oceania (the Pacific Islands) and nearly perfect matches for America (Native Americans), Subsaharan Africa, and gender. The second layer has three variables which correspond very closely to broad geographic regions: Subsaharan Africa, the “East” (including China, Japan, Oceania, America), and EurAsia.

4.2.3 Text from the Twenty Newsgroups Dataset

The twenty newsgroups dataset consists of documents taken from twenty different topical message boards with about a thousand posts each [14]. For analyzing unstructured text, typical feature engineering approaches heuristically separate signals like style, sentiment, or topics. In principle, all three of these signals manifest themselves in terms of subtle correlations in word usage. Recent attempts at learning large-scale unsupervised hierarchical representations of text have produced interesting results [15], though validation is difficult because quantitative measures of representation quality often do not correlate well with human judgment [16]. To focus on linguistic signals, we removed meta-data like headers, footers, and replies even though these give strong signals for supervised newsgroup classification. We considered the ten thousand most frequent tokens and constructed a bag-of-words representation. Then we used CorEx to learn a five-level representation of the data with 326 latent variables in the first layer. Details are described in Sec. C.1. Portions of the first three levels of the tree, keeping only nodes with the highest normalized mutual information with their parents, are shown in Fig. 4 and in Fig. C.1.7

6 Data, descriptions of SNPs, and detailed demographics of subjects are available at ftp://ftp.cephb.fr/hgdp_v3/.
Figure 4: Portions of the hierarchical representation learned for the twenty newsgroups dataset. We label latent variables that overlap significantly with known structure. Newsgroup names, abbreviations, and broad groupings are shown on the right: alt.atheism (aa, rel); comp.graphics (cg, comp); comp.os.ms-windows.misc (cms, comp); comp.sys.ibm.pc.hardware (cpc, comp); comp.sys.mac.hardware (cmac, comp); comp.windows.x (cwx, comp); misc.forsale (mf, misc); rec.autos (ra, vehic); rec.motorcycles (rm, vehic); rec.sport.baseball (rsb, sport); rec.sport.hockey (rsh, sport); sci.crypt (sc, sci); sci.electronics (se, sci); sci.med (sm, sci); sci.space (ss, sci); soc.religion.christian (src, rel); talk.politics.guns (tpg, talk); talk.politics.mideast (tmid, talk); talk.politics.misc (tmisc, talk); talk.religion.misc (trm, rel).

To provide a more quantitative benchmark of the results, we again test to what extent learned representations are related to known structure in the data. Each post can be labeled by the newsgroup it belongs to, according to broad categories (e.g. groups that include “comp”), or by author. Most learned binary variables were active in around 1% of the posts, so we report the fraction of activations that coincide with a known label (precision) in Fig. 4. Most variables clearly represent sub-topics of the newsgroup topics, so we do not expect high recall. The small portion of the tree shown in Fig. 4 reflects intuitive relationships that contain hierarchies of related sub-topics as well as clusters of function words (e.g. pronouns like “he/his/him” or tense with “have/be”). Once again, several learned variables perfectly captured known structure in the data. Some users sent images in text using an encoded format. One feature matched all the image posts (with perfect precision and recall) due to the correlated presence of unusual short tokens. There were also perfect matches for three frequent authors: G. Banks, D. Medin, and B. Beauchaine.
Note that the learned variables did not trigger if just their names appeared in the text, but only for posts they authored. These authors had elaborate signatures with long, identifiable quotes that evaded preprocessing but created a strongly correlated signal. Another variable with perfect precision for the “forsale” newsgroup labeled comic book sales (but did not activate for discussion of comics in other newsgroups). Other nearly perfect predictors described extensive discussions of Armenia/Turkey in talk.politics.mideast (a fifth of all discussion in that group), specialized unix jargon, and a match for sci.crypt which had 90% precision and 55% recall. When we ranked all the latent factors according to a normalized version of Eq. 2, these examples all showed up in the top 20.

5 Connections and Related Work

While the basic measures used in Eq. 1 and Eq. 2 have appeared in several contexts [7, 17, 4, 3, 18], the interpretation of these quantities is an active area of research [19, 20]. The optimizations we define have some interesting but less obvious connections. For instance, the optimization in Eq. 3 is similar to one recently introduced as a measure of “common information” [21]. The objective in Eq. 6 (for a single Yj) appears exactly as a bound on “ancestral” information [22]. For instance, if all the αi = 1/β then Steudel and Ay [22] show that the objective is positive only if at least 1 + β variables share a common ancestor in any DAG describing them. This provides extra rationale for relaxing our original optimization to include non-binary values of αi,j. The most similar learning approach to the one presented here is the information bottleneck [23] and its extension the multivariate information bottleneck [24, 25].

7 An interactive tool for exploring the full hierarchy is available at http://bit.ly/corexvis.
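Eq. 1 is not reproduced in this excerpt; assuming it is the total correlation (multi-information) TC(X) = Σi H(Xi) − H(X1, …, Xm), here is a minimal plug-in estimate from discrete samples (illustrative only, not the paper's estimator):

```python
from collections import Counter
from math import log2

def entropy(samples):
    # Plug-in (empirical) Shannon entropy in bits.
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def total_correlation(columns):
    # TC(X) = sum_i H(X_i) - H(X_1, ..., X_m); it is zero exactly when the
    # empirical joint distribution factorizes into its marginals.
    joint = list(zip(*columns))   # one tuple per sample row
    return sum(entropy(col) for col in columns) - entropy(joint)
```

For example, two perfectly correlated binary columns have TC equal to the one bit of shared entropy, while two empirically independent columns have TC zero.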
The motivation behind the information bottleneck is to compress the data (X) into a smaller representation (Y) so that information about some relevance term (typically labels in a supervised learning setting) is maintained. The second term in Eq. 6 is analogous to the compression term. Instead of maximizing a relevance term, we are maximizing information about all the individual sub-systems of X, the Xi. The most redundant information in the data is preferentially stored while uncorrelated random variables are completely ignored.

The broad problem of transforming complex data into simpler, more meaningful forms goes under the rubric of representation learning [26], which shares many goals with dimensionality reduction and subspace clustering. Insofar as our approach learns a hierarchy of representations, it superficially resembles “deep” approaches like neural nets and autoencoders [27, 28, 29, 30]. While those approaches are scalable, a common critique is that they involve many heuristics discovered through trial-and-error that are difficult to justify. On the other hand, a rich literature on learning latent tree models [31, 32, 9, 10] has excellent theoretical properties but does not scale well. By basing our method on an information-theoretic optimization that can nevertheless be performed quite efficiently, we hope to preserve the best of both worlds.

6 Conclusion

The most challenging open problems today involve high-dimensional data from diverse sources including human behavior, language, and biology.8 The complexity of the underlying systems makes modeling difficult. We have demonstrated a model-free approach to successfully learn more coarse-grained representations of complex data by efficiently optimizing an information-theoretic objective. The principle of explaining as much correlation in the data as possible provides an intuitive and fully data-driven way to discover previously inaccessible structure in high-dimensional systems.
It may seem surprising that CorEx should perfectly recover structure in diverse domains without using labeled data or prior knowledge. On the other hand, the patterns discovered are “low-hanging fruit” from the right point of view. Intelligent systems should be able to learn robust and general patterns in the face of rich inputs even in the absence of labels to define what is important. Information that is very redundant in high-dimensional data provides a good starting point. Several fruitful directions stand out. First, the promising preliminary results invite in-depth investigations on these and related problems. From a computational point of view, the main work of the algorithm involves a matrix multiplication followed by an element-wise non-linear transform. The same is true for neural networks, and they have been scaled to very large data using, e.g., GPUs. On the theoretical side, generalizing this approach to allow non-tree representations appears both feasible and desirable [8].

8 In principle, computer vision should be added to this list. However, the success of unsupervised feature learning with neural nets for vision appears to rely on encoding generic priors about vision through heuristics like convolutional coding and max pooling [33]. Since CorEx is a knowledge-free method, it will perform relatively poorly unless we find a way to also encode these assumptions.

Acknowledgments

We thank Virgil Griffith, Shuyang Gao, Hsuan-Yi Chu, Shirley Pepke, Bilal Shaw, Jose-Luis Ambite, and Nathan Hodas for helpful conversations. This research was supported in part by AFOSR grant FA9550-12-1-0417 and DARPA grant W911NF-12-1-0034.

References

[1] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In ICLR, 2014.
[2] Thomas M. Cover and Joy A. Thomas. Elements of information theory. Wiley-Interscience, 2006.
[3] Satosi Watanabe. Information theoretical analysis of multivariate correlation.
IBM Journal of Research and Development, 4(1):66–82, 1960.
[4] M. Studený and J. Vejnarová. The multiinformation function as a tool for measuring stochastic dependence. In Learning in graphical models, pages 261–297. Springer, 1998.
[5] Alexander Kraskov, Harald Stögbauer, Ralph G. Andrzejak, and Peter Grassberger. Hierarchical clustering using mutual information. EPL (Europhysics Letters), 70(2):278, 2005.
[6] J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, NY, NY, USA, 2009.
[7] Elad Schneidman, William Bialek, and Michael J. Berry. Synergy, redundancy, and independence in population codes. The Journal of Neuroscience, 23(37):11539–11553, 2003.
[8] Greg Ver Steeg and Aram Galstyan. Maximally informative hierarchical representations of high-dimensional data. arXiv:1410.7404, 2014.
[9] Animashree Anandkumar, Kamalika Chaudhuri, Daniel Hsu, Sham M. Kakade, Le Song, and Tong Zhang. Spectral methods for learning multivariate latent tree structure. In NIPS, pages 2025–2033, 2011.
[10] Myung Jin Choi, Vincent Y.F. Tan, Animashree Anandkumar, and Alan S. Willsky. Learning latent tree graphical models. The Journal of Machine Learning Research, 12:1771–1812, 2011.
[11] Lewis R. Goldberg. The development of markers for the big-five factor structure. Psychological Assessment, 4(1):26, 1992.
[12] Aapo Hyvärinen and Erkki Oja. Independent component analysis: algorithms and applications. Neural Networks, 13(4):411–430, 2000.
[13] N.A. Rosenberg, J.K. Pritchard, J.L. Weber, H.M. Cann, K.K. Kidd, L.A. Zhivotovsky, and M.W. Feldman. Genetic structure of human populations. Science, 298(5602):2381–2385, 2002.
[14] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[15] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv:1301.3781, 2013.
[16] Jonathan Chang, Jordan L. Boyd-Graber, Sean Gerrish, Chong Wang, and David M. Blei.
Reading tea leaves: How humans interpret topic models. In NIPS, volume 22, pages 288–296, 2009.
[17] Elad Schneidman, Susanne Still, Michael J. Berry, William Bialek, et al. Network information and connected correlations. Physical Review Letters, 91(23):238701, 2003.
[18] Nihat Ay, Eckehard Olbrich, Nils Bertschinger, and Jürgen Jost. A unifying framework for complexity measures of finite systems. Proceedings of European Complex Systems Society, 2006.
[19] P.L. Williams and R.D. Beer. Nonnegative decomposition of multivariate information. arXiv:1004.2515, 2010.
[20] Virgil Griffith and Christof Koch. Quantifying synergistic mutual information. arXiv:1205.4265, 2012.
[21] Gowtham Ramani Kumar, Cheuk Ting Li, and Abbas El Gamal. Exact common information. arXiv:1402.0062, 2014.
[22] B. Steudel and N. Ay. Information-theoretic inference of common ancestors. arXiv:1010.5720, 2010.
[23] Naftali Tishby, Fernando C. Pereira, and William Bialek. The information bottleneck method. arXiv:physics/0004057, 2000.
[24] Noam Slonim, Nir Friedman, and Naftali Tishby. Multivariate information bottleneck. Neural Computation, 18(8):1739–1789, 2006.
[25] Noam Slonim. The information bottleneck: Theory and applications. PhD thesis, Hebrew University of Jerusalem, 2002.
[26] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
[27] Geoffrey E. Hinton and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[28] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[29] Yann LeCun and Yoshua Bengio. Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks, 3361, 1995.
[30] Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle.
Greedy layer-wise training of deep networks. Advances in Neural Information Processing Systems, 19:153, 2007.
[31] Raphaël Mourad, Christine Sinoquet, Nevin L. Zhang, Tengfei Liu, Philippe Leray, et al. A survey on latent tree models and applications. Journal of Artificial Intelligence Research (JAIR), 47:157–203, 2013.
[32] Ryan Prescott Adams, Hanna M. Wallach, and Zoubin Ghahramani. Learning the structure of deep sparse graphical models. arXiv:1001.0160, 2009.
[33] H. Lee, R. Grosse, R. Ranganath, and A. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.
[34] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
Difference of Convex Functions Programming for Reinforcement Learning

Bilal Piot1,2, Matthieu Geist1, Olivier Pietquin2,3
1 MaLIS research group (SUPELEC) - UMI 2958 (GeorgiaTech-CNRS), France
2 LIFL (UMR 8022 CNRS/Lille 1) - SequeL team, Lille, France
3 University Lille 1 - IUF (Institut Universitaire de France), France
bilal.piot@lifl.fr, matthieu.geist@supelec.fr, olivier.pietquin@univ-lille1.fr

Abstract

Large Markov Decision Processes are usually solved using Approximate Dynamic Programming methods such as Approximate Value Iteration or Approximate Policy Iteration. The main contribution of this paper is to show that, alternatively, the optimal state-action value function can be estimated using Difference of Convex functions (DC) Programming. To do so, we study the minimization of a norm of the Optimal Bellman Residual (OBR) T*Q − Q, where T* is the so-called optimal Bellman operator. Controlling this residual allows controlling the distance to the optimal action-value function, and we show that minimizing an empirical norm of the OBR is consistent in the Vapnik sense. Finally, we frame this optimization problem as a DC program. This opens the way to using the large related literature on DC Programming to address the Reinforcement Learning problem.

1 Introduction

This paper addresses the problem of solving large state-space Markov Decision Processes (MDPs) [16] in an infinite time horizon and discounted reward setting. The classical methods to tackle this problem, such as Approximate Value Iteration (AVI) or Approximate Policy Iteration (API) [6, 16]1, are derived from Dynamic Programming (DP). Here, we propose an alternative path. The idea is to search directly for a function Q for which T*Q ≈ Q, where T* is the optimal Bellman operator, by minimizing a norm of the Optimal Bellman Residual (OBR) T*Q − Q. First, in Sec. 2.2, we show that OBR Minimization (OBRM) is interesting, as it can serve as a proxy for the optimal action-value function estimation.
Then, in Sec. 3, we prove that minimizing an empirical norm of the OBR is consistent in the Vapnik sense (this justifies working with sampled transitions). However, this empirical norm of the OBR is not convex. We hypothesize that this is why this approach is not studied in the literature (as far as we know), a notable exception being the work of Baird [5]. Therefore, our main contribution, presented in Sec. 4, is to show that this minimization can be framed as a minimization of a Difference of Convex functions (DC) [11]. Thus, a large literature on Difference of Convex functions Algorithms (DCA) [19, 20] (a rather standard approach to non-convex programming) is available to solve our problem. Finally, in Sec. 5, we conduct a generic experiment that compares a naive implementation of our approach to API and AVI methods, showing that it is competitive.

1 Other methods such as Approximate Linear Programming (ALP) [7, 8] or Dynamic Policy Programming (DPP) [4] address the same problem. Yet, they also rely on DP.

2 Background

2.1 MDP and ADP

Before describing the framework of MDPs in the infinite-time horizon and discounted reward setting, we give some general notations. Let (R, |.|) be the real space with its canonical norm and X a finite set; R^X is the set of functions from X to R. The set of probability distributions over X is noted Δ_X. Let Y be a finite set; Δ_X^Y is the set of functions from Y to Δ_X. Let α ∈ R^X, p ≥ 1 and ν ∈ Δ_X; we define the L_{p,ν}-semi-norm of α, noted ∥α∥_{p,ν}, by ∥α∥_{p,ν} = (Σ_{x∈X} ν(x)|α(x)|^p)^{1/p}. In addition, the infinite norm is noted ∥α∥_∞ and defined as ∥α∥_∞ = max_{x∈X} |α(x)|. Let v be a random variable which takes its values in X; v ∼ ν means that the probability that v = x is ν(x). Now, we provide a brief summary of some of the concepts from the theory of MDP and ADP [16].
Here, the agent is supposed to act in a finite MDP2 represented by a tuple M = {S, A, R, P, γ} where S = {s_i}_{1≤i≤N_S} is the state space, A = {a_i}_{1≤i≤N_A} is the action space, R ∈ R^{S×A} is the reward function, γ ∈ ]0, 1[ is a discount factor and P ∈ Δ_S^{S×A} is the Markovian dynamics which gives the probability, P(s′|s, a), to reach s′ by choosing action a in state s. A policy π is an element of A^S and defines the behavior of an agent. The quality of a policy π is defined by the action-value function. For a given policy π, the action-value function Q^π ∈ R^{S×A} is defined as Q^π(s, a) = E_π[Σ_{t=0}^{+∞} γ^t R(s_t, a_t)], where E_π is the expectation over the distribution of the admissible trajectories (s_0, a_0, s_1, π(s_1), . . . ) obtained by executing the policy π starting from s_0 = s and a_0 = a. Moreover, the function Q* ∈ R^{S×A} defined as Q* = max_{π∈A^S} Q^π is called the optimal action-value function. A policy π is optimal if ∀s ∈ S, Q^π(s, π(s)) = Q*(s, π(s)). A policy π is said greedy with respect to a function Q if ∀s ∈ S, π(s) ∈ argmax_{a∈A} Q(s, a). Greedy policies are important because a policy π greedy with respect to Q* is optimal. In addition, as we work in the finite MDP setting, we define, for each policy π, the matrix P_π of size N_S N_A × N_S N_A with elements P_π((s, a), (s′, a′)) = P(s′|s, a) 1_{π(s′)=a′}. Let ν ∈ Δ_{S×A}; we note νP_π ∈ Δ_{S×A} the distribution such that (νP_π)(s, a) = Σ_{(s′,a′)∈S×A} ν(s′, a′) P_π((s′, a′), (s, a)). Finally, Q^π and Q* are known to be fixed points of the contracting operators T^π and T* respectively:

∀Q ∈ R^{S×A}, ∀(s, a) ∈ S × A, T^π Q(s, a) = R(s, a) + γ Σ_{s′∈S} P(s′|s, a) Q(s′, π(s′)),
∀Q ∈ R^{S×A}, ∀(s, a) ∈ S × A, T* Q(s, a) = R(s, a) + γ Σ_{s′∈S} P(s′|s, a) max_{b∈A} Q(s′, b).

When the state space becomes large, two important problems arise to solve large MDPs. The first one, called the representation problem, is that an exact representation of the values of the action-value functions is impossible, so these functions need to be represented with a moderate number of coefficients.
The second problem, called the sample problem, is that there is no direct access to the Bellman operators but only samples from them. One solution for the representation problem is to linearly parameterize the action-value functions thanks to a basis of d ∈ N* functions φ = (φ_i)_{i=1}^d where φ_i ∈ R^{S×A}. In addition, we define for each state-action couple (s, a) the vector φ(s, a) ∈ R^d such that φ(s, a) = (φ_i(s, a))_{i=1}^d. Thus, the action-value functions are characterized by a vector θ ∈ R^d and noted Q_θ:

∀θ ∈ R^d, ∀(s, a) ∈ S × A, Q_θ(s, a) = Σ_{i=1}^d θ_i φ_i(s, a) = ⟨θ, φ(s, a)⟩,

where ⟨., .⟩ is the canonical dot product of R^d. The usual frameworks to solve large MDPs are for instance AVI and API. AVI consists in processing a sequence (Q_{θ_n}^{AVI})_{n∈N} where θ_0 ∈ R^d and ∀n ∈ N, Q_{θ_{n+1}}^{AVI} ≈ T* Q_{θ_n}^{AVI}. API consists in processing two sequences (Q_{θ_n}^{API})_{n∈N} and (π_n^{API})_{n∈N} where π_0^{API} ∈ A^S, ∀n ∈ N, Q_{θ_n}^{API} ≈ T^{π_n} Q_{θ_n}^{API} and π_{n+1}^{API} is greedy with respect to Q_{θ_n}^{API}. The approximation steps in AVI and API generate the sequences of errors (ϵ_n^{AVI} = T* Q_{θ_n}^{AVI} − Q_{θ_{n+1}}^{AVI})_{n∈N} and (ϵ_n^{API} = T^{π_n} Q_{θ_n}^{API} − Q_{θ_n}^{API})_{n∈N} respectively. Those approximation errors are due to both the representation and the sample problems and can be made explicit for specific implementations of those methods [14, 1]. These ADP methods are legitimated by the following bound [15, 9]:

lim sup_{n→∞} ∥Q* − Q^{π_n^{API\AVI}}∥_{p,ν} ≤ (2γ / (1 − γ)^2) C_2(ν, µ)^{1/p} ϵ^{API\AVI},   (1)

where π_n^{API\AVI} is greedy with respect to Q_{θ_n}^{API\AVI}, ϵ^{API\AVI} = sup_{n∈N} ∥ϵ_n^{API\AVI}∥_{p,µ} and C_2(ν, µ) is a second-order concentrability coefficient, C_2(ν, µ) = (1 − γ)^2 Σ_{m≥1} m γ^{m−1} c(m), where c(m) = max_{π_1,...,π_m,(s,a)∈S×A} (νP_{π_1}P_{π_2}...P_{π_m})(s, a) / µ(s, a). In the next section, we compare the bound Eq. (1) with a similar bound derived from the OBR minimization approach in order to justify it.

2 This work could be easily extended to measurable state spaces as in [9]; we choose the finite case for the ease and clarity of exposition.

2.2 Why minimizing the OBR?
The aim of Dynamic Programming (DP) is, given an MDP M, to find Q*, which is equivalent to minimizing a certain norm of the OBR, J_{p,µ}(Q) = ∥T*Q − Q∥_{p,µ}, where µ ∈ Δ_{S×A} is such that ∀(s, a) ∈ S × A, µ(s, a) > 0 and p ≥ 1. Indeed, it is trivial to verify that the only minimizer of J_{p,µ} is Q*. Moreover, we have the following bound given by Th. 1.

Theorem 1. Let ν ∈ Δ_{S×A}, µ ∈ Δ_{S×A}, π̂ ∈ A^S and C_1(ν, µ, π̂) ∈ [1, +∞[ ∪ {+∞} the smallest constant verifying (1 − γ) ν Σ_{t≥0} γ^t P_π̂^t ≤ C_1(ν, µ, π̂) µ; then:

∀Q ∈ R^{S×A}, ∥Q* − Q^π∥_{p,ν} ≤ (2 / (1 − γ)) [(C_1(ν, µ, π) + C_1(ν, µ, π*)) / 2]^{1/p} ∥T*Q − Q∥_{p,µ},   (2)

where π is greedy with respect to Q and π* is any optimal policy.

Proof. A proof is given in the supplementary file. Similar results exist [15].

In Reinforcement Learning (RL), because of the representation and the sample problems, minimizing ∥T*Q − Q∥_{p,µ} over R^{S×A} is not possible (see Sec. 3 for details), but we can consider that our approach provides us a function Q such that T*Q ≈ Q and define the error ϵ_OBRM = ∥T*Q − Q∥_{p,µ}. Thus, via Eq. (2), we have:

∥Q* − Q^π∥_{p,ν} ≤ (2 / (1 − γ)) [(C_1(ν, µ, π) + C_1(ν, µ, π*)) / 2]^{1/p} ϵ_OBRM,   (3)

where π is greedy with respect to Q. This bound has the same form as the one of API and AVI described in Eq. (1), and Tab. 1 allows comparing them.

Algorithms | Horizon term | Concentrability term | Error term
API\AVI | 2γ/(1−γ)^2 | C_2(ν, µ) | ϵ^{API\AVI}
OBRM | 2/(1−γ) | (C_1(ν,µ,π) + C_1(ν,µ,π*))/2 | ϵ_OBRM

Table 1: Bounds comparison.

This bound has two advantages over API\AVI. First, the horizon term 2/(1−γ) is better than the horizon term 2γ/(1−γ)^2 as long as γ > 0.5, which is the usual case. Second, the concentrability term (C_1(ν,µ,π) + C_1(ν,µ,π*))/2 is considered better than C_2(ν, µ), mainly because if C_2(ν, µ) < +∞ then (C_1(ν,µ,π) + C_1(ν,µ,π*))/2 < +∞, the converse not being true (see [17] for a discussion about the comparison of these concentrability coefficients). Thus, the bound Eq. (3) justifies the minimization of a norm of the OBR, as long as we are able to control the error term ϵ_OBRM.
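To make these objects concrete, here is a small tabular sketch (the MDP numbers are made up for the example, not taken from the paper) showing that iterating T* converges to Q* and that Q* drives the OBR norm J_{p,µ}(Q) = ∥T*Q − Q∥_{p,µ} to zero:

```python
import numpy as np

def T_star(Q, R, P, gamma):
    # (T*Q)(s, a) = R(s, a) + gamma * sum_{s'} P(s'|s, a) * max_b Q(s', b)
    return R + gamma * np.einsum('sap,p->sa', P, Q.max(axis=1))

# Illustrative 3-state, 2-action MDP with random dynamics P[s, a, s'].
rng = np.random.default_rng(0)
P = rng.random((3, 2, 3)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((3, 2))
gamma = 0.9

# Value iteration: T* is a gamma-contraction, so iterating it converges to Q*.
Q = np.zeros((3, 2))
for _ in range(600):
    Q = T_star(Q, R, P, gamma)

mu = np.full((3, 2), 1.0 / 6.0)   # uniform distribution over state-action couples
obr_norm = float((mu * np.abs(T_star(Q, R, P, gamma) - Q) ** 2).sum() ** 0.5)
```

At the fixed point the residual T*Q − Q vanishes, so the L_{2,µ} norm of the OBR is (numerically) zero, as Theorem 1 suggests it should be for the minimizer Q*.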
3 Vapnik-Consistency of the empirical norm of the OBR

When the state space is too large, it is not possible to minimize directly ∥T*Q − Q∥_{p,µ}, as we need to compute T*Q(s, a) for each couple (s, a) (sample problem). However, we can consider the case where we choose N samples represented by N independent and identically distributed random variables (S_i, A_i)_{1≤i≤N} such that (S_i, A_i) ∼ µ, and minimize ∥T*Q − Q∥_{p,µ_N} where µ_N is the empirical distribution µ_N(s, a) = (1/N) Σ_{i=1}^N 1_{(S_i,A_i)=(s,a)}. An important question (answered below) is to know if controlling the empirical norm allows controlling the true norm of interest (consistency in the Vapnik sense [22]), and at what rate convergence occurs. Computing ∥T*Q − Q∥_{p,µ_N} = ((1/N) Σ_{i=1}^N |T*Q(S_i, A_i) − Q(S_i, A_i)|^p)^{1/p} is tractable if we consider that we can compute T*Q(S_i, A_i), which means that we have a perfect knowledge of the dynamics P and that the number of next states for the state-action couple (S_i, A_i) is not too large. In Sec. 4.3, we propose different solutions to evaluate T*Q(S_i, A_i) when the number of next states is too large or when the dynamics is not provided. Now, the natural question is to what extent minimizing ∥T*Q − Q∥_{p,µ_N} corresponds to minimizing ∥T*Q − Q∥_{p,µ}. In addition, we cannot minimize ∥T*Q − Q∥_{p,µ_N} over R^{S×A}, as this space is too large (representation problem), but over the space {Q_θ ∈ R^{S×A}, θ ∈ R^d}. Moreover, as we are looking for a function such that Q_θ = Q*, we can limit our search to the functions satisfying ∥Q_θ∥_∞ ≤ ∥R∥_∞/(1 − γ). Thus, we search for a function Q in the hypothesis space Q = {Q_θ ∈ R^{S×A}, θ ∈ R^d, ∥Q_θ∥_∞ ≤ ∥R∥_∞/(1 − γ)}, in order to minimize ∥T*Q − Q∥_{p,µ_N}. Let Q_N ∈ argmin_{Q∈Q} ∥T*Q − Q∥_{p,µ_N} be a minimizer of the empirical norm of the OBR; we want to know to what extent the empirical error ∥T*Q_N − Q_N∥_{p,µ_N} is related to the real error ϵ_OBRM = ∥T*Q_N − Q_N∥_{p,µ}. The answer for finite deterministic MDPs is given by Th. 2 (the continuous-stochastic MDP case being discussed shortly after).

Theorem 2.
Let η ∈ ]0, 1[ and M be a finite deterministic MDP; with probability at least 1 − η, we have:

∀Q ∈ Q, ∥T*Q − Q∥^p_{p,µ} ≤ ∥T*Q − Q∥^p_{p,µ_N} + (2∥R∥_∞/(1 − γ))^p √ε(N),

where ε(N) = [h(ln(2N/h) + 1) + ln(4/η)] / N and h = 2N_A(d + 1). With probability at least 1 − 2η:

ϵ_OBRM = ∥T*Q_N − Q_N∥^p_{p,µ} ≤ ϵ_B + (2∥R∥_∞/(1 − γ))^p (√ε(N) + √(ln(1/η)/(2N))),

where ϵ_B = min_{Q∈Q} ∥T*Q − Q∥^p_{p,µ} is the error due to the choice of features.

Proof. The complete proof is provided in the supplementary file. It mainly consists in computing the Vapnik-Chervonenkis dimension of the residual.

Thus, if we were able to compute a function such as Q_N, we would have, thanks to Eq. (2) and Th. 2:

∥Q* − Q^{π_N}∥_{p,ν} ≤ (2 / (1 − γ)) [(C_1(ν, µ, π_N) + C_1(ν, µ, π*)) / 2]^{1/p} (ϵ_B + (2∥R∥_∞/(1 − γ))^p (√ε(N) + √(ln(1/η)/(2N))))^{1/p},

where π_N is greedy with respect to Q_N. The error term ϵ_OBRM is explicitly controlled by two terms: ϵ_B, a term of bias, and (2∥R∥_∞/(1 − γ))^p (√ε(N) + √(ln(1/η)/(2N))), a term of variance. The term ϵ_B = min_{Q∈Q} ∥T*Q − Q∥^p_{p,µ} is relative to the representation problem and is fixed by the choice of features. The term of variance is decreasing at the speed √(1/N). A similar bound can be obtained for non-deterministic continuous-state MDPs with a finite number of actions where the state space is a compact set in a metric space, the features (φ_i)_{i=1}^d are Lipschitz, and for each state-action couple the next states belong to a ball of fixed radius. The proof is a simple extension of the one given in the supplementary material. Those continuous MDPs are representative of real dynamical systems. Now that we know that minimizing ∥T*Q − Q∥^p_{p,µ_N} allows controlling ∥Q* − Q^{π_N}∥_{p,ν}, the question is how we frame this optimization problem. Indeed, ∥T*Q − Q∥^p_{p,µ_N} is a non-convex and non-differentiable function with respect to Q, thus a direct minimization could lead us to bad solutions. In the next section, we propose a method to alleviate those difficulties.
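The empirical OBR norm of Sec. 3 is straightforward to compute when the dynamics P are known; a minimal sketch (illustrative shapes and numbers, not the paper's implementation), including the degenerate check that with γ = 0 the choice Q = R has zero residual:

```python
import numpy as np

def empirical_obr_norm(Q, R, P, gamma, samples, p=2):
    # ||T*Q - Q||_{p, mu_N}: the mean of |T*Q - Q|^p over the sampled
    # (s, a) couples, raised to the power 1/p, assuming known dynamics P.
    TQ = R + gamma * np.einsum('sap,p->sa', P, Q.max(axis=1))
    res = np.array([abs(TQ[s, a] - Q[s, a]) for s, a in samples])
    return float(np.mean(res ** p) ** (1.0 / p))

# Degenerate sanity check: with gamma = 0 we have T*Q = R, so Q = R is a fixed point.
R = np.array([[1.0, 0.5], [0.2, 0.8]])
P = np.full((2, 2, 2), 0.5)                    # uniform dynamics P[s, a, s']
norm = empirical_obr_norm(R, R, P, gamma=0.0, samples=[(0, 0), (1, 1), (0, 1)])
```

In the batch setting of Sec. 4.3, the exact expectation over s′ inside TQ would be replaced by a sample-based estimate.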
4 Reduction to a DC problem

Here, we frame the minimization of the empirical norm of the OBR as a DC problem and instantiate a general algorithm, DCA [20], that tries to solve it. First, we provide a short introduction to difference of convex functions.

4.1 DC background

Let E be a finite dimensional Hilbert space and ⟨., .⟩_E, ∥.∥_E its dot product and norm respectively. We say that a function f ∈ R^E is DC if there exist g, h ∈ R^E which are convex and lower semi-continuous such that f = g − h. The set of DC functions is noted DC(E) and is stable under most of the operations that can be encountered in optimization, contrary to the set of convex functions. Indeed, let (f_i)_{i=1}^K be a sequence of K ∈ N* DC functions and (α_i)_{i=1}^K ∈ R^K; then Σ_{i=1}^K α_i f_i, Π_{i=1}^K f_i, min_{1≤i≤K} f_i, max_{1≤i≤K} f_i and |f_i| are DC functions [11]. In order to minimize a DC function f = g − h, we need to define a notion of differentiability for convex and lower semi-continuous functions. Let g be such a function and e ∈ E; we define the sub-gradient ∂_e g of g in e as: ∂_e g = {δ ∈ E, ∀e′ ∈ E, g(e′) ≥ g(e) + ⟨e′ − e, δ⟩_E}. For a convex and lower semi-continuous g ∈ R^E, the sub-gradient ∂_e g is non-empty for all e ∈ E [11]. This observation leads to a minimization method for a function f ∈ DC(E) called the Difference of Convex functions Algorithm (DCA). Indeed, as f is DC, we have:

∀(e, e′) ∈ E^2, f(e′) = g(e′) − h(e′) ≤ g(e′) − h(e) − ⟨e′ − e, δ⟩_E,

where δ ∈ ∂_e h and the inequality holds by definition of the sub-gradient. Thus, for all e ∈ E, the function f is upper bounded by a function f_e ∈ R^E defined for all e′ ∈ E by f_e(e′) = g(e′) − h(e) − ⟨e′ − e, δ⟩_E. The function f_e is convex and lower semi-continuous (as it is the sum of two convex and lower semi-continuous functions, namely g and the affine function e′ ↦ ⟨e − e′, δ⟩_E − h(e)). In addition, those functions have the particular property that ∀e ∈ E, f(e) = f_e(e). The set of convex functions (f_e)_{e∈E} that upper-bound the function f plays a key role in DCA.
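Iteratively minimizing these convex upper bounds f_e is the core of DCA; a toy sketch on a one-dimensional DC function (an illustration of the scheme, not the paper's solver):

```python
def dca(argmin_g_linear, h_subgrad, x0, iters=25):
    # Each DCA step linearizes h at the current point x and minimizes the
    # convex upper bound g(x') - h(x) - (x' - x) * delta, delta in ∂h(x),
    # which for a fixed delta amounts to argmin_{x'} g(x') - delta * x'.
    x = x0
    for _ in range(iters):
        delta = h_subgrad(x)
        x = argmin_g_linear(delta)
    return x

# Toy DC function f(x) = x**2 - |x|, with g(x) = x**2 and h(x) = |x|.
h_subgrad = lambda x: 1.0 if x >= 0 else -1.0     # a sub-gradient of |x|
argmin_g_linear = lambda d: d / 2.0               # argmin of x**2 - d*x
x_star = dca(argmin_g_linear, h_subgrad, x0=3.0)
f = lambda x: x ** 2 - abs(x)
```

Starting from x0 = 3, the iterates reach the minimizer x = 0.5 with f(0.5) = −0.25; on this toy function the local minimum found by DCA happens to be global, which is not guaranteed in general.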
The algorithm DCA [20] consists in constructing a sequence (e_n)_{n∈N} such that the sequence (f(e_n))_{n∈N} decreases. The first step is to choose a starting point e_0 ∈ E; then we minimize the convex function f_{e_0} that upper-bounds the function f. We note e_1 a minimizer of f_{e_0}, e_1 ∈ argmin_{e∈E} f_{e_0}(e). This minimization can be realized by any convex optimization solver. As f(e_0) = f_{e_0}(e_0) ≥ f_{e_0}(e_1) and f_{e_0}(e_1) ≥ f(e_1), then f(e_0) ≥ f(e_1). Thus, if we construct the sequence (e_n)_{n∈N} such that ∀n ∈ N, e_{n+1} ∈ argmin_{e∈E} f_{e_n}(e) with e_0 ∈ E, then we obtain a decreasing sequence (f(e_n))_{n∈N}. Therefore, the algorithm DCA solves a sequence of convex optimization problems in order to solve a DC optimization problem. Three important choices can radically change the DCA performance: the first one is the explicit choice of the decomposition of f, the second one is the choice of the starting point e_0, and the last one is the choice of the intermediate convex solver. The DCA algorithm hardly guarantees convergence to the global optimum, but it usually provides good solutions. Moreover, it has some nice properties when one of the functions g or h is polyhedral. A function g is said polyhedral when ∀e ∈ E, g(e) = max_{1≤i≤K} [⟨α_i, e⟩_E + β_i], where (α_i)_{i=1}^K ∈ E^K and (β_i)_{i=1}^K ∈ R^K. If one of the functions g, h is polyhedral, f is bounded from below, and the DCA sequence (e_n)_{n∈N} is bounded, then the DCA algorithm converges in finite time to a local minimum. The finite-time aspect is quite interesting in terms of implementation. More details about DC programming and DCA, including conditions for convergence to the global optimum, are given in [20].

4.2 The OBR minimization framed as a DC problem

A first important result is that, for any choice of p ≥ 1, the OBRM is actually a DC problem.

Theorem 3. Let J^p_{p,µ_N}(θ) = ∥T*Q_θ − Q_θ∥^p_{p,µ_N} be a function from R^d to the reals; J^p_{p,µ_N} is a DC function when p ∈ N*.

Proof. Let us write J^p_{p,µ_N} as:

J^p_{p,µ_N}(θ) = (1/N) Σ_{i=1}^N |⟨φ(S_i, A_i), θ⟩ − R(S_i, A_i) − γ Σ_{s′∈S} P(s′|S_i, A_i) max_{a∈A} ⟨φ(s′, a), θ⟩|^p.
First, as for each (S_i, A_i) the linear function ⟨φ(S_i, A_i), .⟩ is convex and continuous, the affine function g_i = ⟨φ(S_i, A_i), .⟩ + R(S_i, A_i) is convex and continuous. The function max_{a∈A} ⟨φ(s′, a), .⟩ is also convex and continuous as a finite maximum of convex and continuous functions. In addition, the function h_i = γ Σ_{s′∈S} P(s′|S_i, A_i) max_{a∈A} ⟨φ(s′, a), .⟩ is convex and continuous as a positively weighted finite sum of convex and continuous functions. Thus, the function f_i = g_i − h_i is a DC function. As an absolute value of a DC function is DC, a finite product of DC functions is DC and a weighted sum of DC functions is DC, J^p_{p,µN} = (1/N) Σ_{i=1}^N |f_i|^p is a DC function.

However, knowing that J^p_{p,µN} is DC is not sufficient in order to use the DCA algorithm. Indeed, we need an explicit decomposition of J^p_{p,µN} as a difference of two convex functions. We present two polyhedral explicit decompositions of J^p_{p,µN}, for p = 1 and for p = 2.

Theorem 4. There exist explicit polyhedral decompositions of J^p_{p,µN} when p = 1 and p = 2. For p = 1: J_{1,µN} = G_{1,µN} − H_{1,µN}, where G_{1,µN} = (1/N) Σ_{i=1}^N 2 max(g_i, h_i) and H_{1,µN} = (1/N) Σ_{i=1}^N (g_i + h_i), with g_i = ⟨φ(S_i, A_i), .⟩ + R(S_i, A_i) and h_i = γ Σ_{s′∈S} P(s′|S_i, A_i) max_{a∈A} ⟨φ(s′, a), .⟩. For p = 2: J²_{2,µN} = G_{2,µN} − H_{2,µN}, where G_{2,µN} = (1/N) Σ_{i=1}^N 2[ḡ_i² + h̄_i²] and H_{2,µN} = (1/N) Σ_{i=1}^N [ḡ_i + h̄_i]², with:
ḡ_i = max(g_i, h_i) + g_i − (⟨φ(S_i, A_i) + γ Σ_{s′∈S} P(s′|S_i, A_i) φ(s′, a_1), .⟩ − R(S_i, A_i)),
h̄_i = max(g_i, h_i) + h_i − (⟨φ(S_i, A_i) + γ Σ_{s′∈S} P(s′|S_i, A_i) φ(s′, a_1), .⟩ − R(S_i, A_i)).

Proof. The proof is provided in the supplementary material.

Unfortunately, there is currently no guarantee that DCA applied to J^p_{p,µN} = G_{p,µN} − H_{p,µN} outputs Q_N ∈ argmin_{Q∈Q} ∥T∗Q − Q∥_{p,µN}. The error between the output ˆQ_N of DCA and Q_N is not studied here, but it is a nice theoretical perspective for future work.

4.3 The batch scenario

Previously, we assumed that it was possible to compute T∗Q(s, a) = R(s, a) + γ Σ_{s′∈S} P(s′|s, a) max_{b∈A} Q(s′, b).
However, if the number of next states s′ for a given couple (s, a) is too large, or if T∗ is unknown, this can be intractable. A solution, when we have a simulator, is to generate for each couple (S_i, A_i) a set of N′ samples (S′_{i,j})_{j=1}^{N′} and to provide an unbiased estimate of T∗Q(S_i, A_i): ˆT∗Q(S_i, A_i) = R(S_i, A_i) + γ (1/N′) Σ_{j=1}^{N′} max_{a∈A} Q(S′_{i,j}, a). Even if | ˆT∗Q(S_i, A_i) − Q(S_i, A_i)|^p is a biased estimator of |T∗Q(S_i, A_i) − Q(S_i, A_i)|^p, this bias can be controlled by the number of samples N′. In the case where we do not have such a simulator, but only sampled transitions (S_i, A_i, S′_i)_{i=1}^N (the batch scenario), it is possible to provide an unbiased estimate of T∗Q(S_i, A_i) via: ˆT∗Q(S_i, A_i) = R(S_i, A_i) + γ max_{b∈A} Q(S′_i, b). However, in that case, | ˆT∗Q(S_i, A_i) − Q(S_i, A_i)|^p is a biased estimator of |T∗Q(S_i, A_i) − Q(S_i, A_i)|^p and the bias is uncontrolled [2]. In order to alleviate this problem, typical of the batch scenario, several techniques have been proposed in the literature to provide a better estimator of |T∗Q(S_i, A_i) − Q(S_i, A_i)|^p, such as embeddings in Reproducing Kernel Hilbert Spaces (RKHS) [13] or locally weighted averagers such as Nadaraya-Watson estimators [21]. In both cases, the estimate of T∗Q(S_i, A_i) takes the form ˆT∗Q(S_i, A_i) = R(S_i, A_i) + γ (1/N) Σ_{j=1}^N β_i(S′_j) max_{a∈A} Q(S′_j, a), where β_i(S′_j) represents the weight of the sample S′_j in the estimation of T∗Q(S_i, A_i). To obtain an explicit DC decomposition of ˆJ^p_{p,µN}(θ) = (1/N) Σ_{i=1}^N | ˆT∗Q_θ(S_i, A_i) − Q_θ(S_i, A_i)|^p when p = 1 or p = 2, it is sufficient to replace Σ_{s′∈S} P(s′|S_i, A_i) max_{a∈A} ⟨φ(s′, a), θ⟩ by (1/N) Σ_{j=1}^N β_i(S′_j) max_{a∈A} Q(S′_j, a) (or (1/N′) Σ_{j=1}^{N′} max_{a∈A} Q(S′_{i,j}, a) if we have a simulator) in the DC decomposition of J^p_{p,µN}.

5 Illustration

This experiment focuses on stationary Garnet problems, which are a class of randomly constructed finite MDPs representative of the kind of finite MDPs that might be encountered in practice [3].
A stationary Garnet problem is characterized by 3 parameters: Garnet(N_S, N_A, N_B). The parameters N_S and N_A are the numbers of states and actions respectively, and N_B is a branching factor specifying the number of next states for each state-action pair. Here, we choose a particular type of Garnet which has a topological structure typical of real dynamical systems and aims at simulating the behavior of a smooth continuous-state MDP (as described in Sec. 3). Such systems generally have multi-dimensional state spaces where an action leads to different next states close to each other; the fact that an action leads to close next states can model, for instance, the noise in a real system. Thus, problems such as the highway simulator [12], the mountain car or the inverted pendulum (possibly discretized) are particular cases of this type of Garnet. For these particular Garnets, the state space is composed of d dimensions (d = 2 in this experiment) and each dimension i has a finite number of elements x_i (x_i = 10). So, a state s = [s¹, s², .., sⁱ, .., s^d] is a d-tuple where each component sⁱ takes a finite value between 1 and x_i, and the distance between two states s, s′ is ∥s − s′∥² = Σ_{i=1}^d (sⁱ − s′ⁱ)². Thus, we obtain MDPs with a state space of size Π_{i=1}^d x_i. The number of actions is N_A = 5. For each state-action couple (s, a), we choose randomly N_B next states (N_B = 5) via a d-dimensional Gaussian distribution centered at s whose covariance matrix is the identity matrix of size d, I_d, multiplied by a term σ (here σ = 1). This controls the smoothness of the MDP: if σ is small the next states s′ are close to s, and if σ is large the next states s′ can be very far from each other and also from s. The probability of going to each next state s′ is generated by partitioning the unit interval at N_B − 1 cut points selected randomly. For each couple (s, a), the reward R(s, a) is drawn uniformly between −1 and 1.
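The construction above can be sketched programmatically. The following is a minimal illustration; the function name and the dict-based MDP representation are our own, not the authors' code:

```python
import itertools
import random

def make_garnet(d=2, x_max=10, n_actions=5, n_branch=5, sigma=1.0, seed=0):
    # Sketch of the Garnet construction described above: for every
    # state-action pair, n_branch distinct next states are drawn from a
    # Gaussian centered at s (rounded and clipped to the grid), transition
    # probabilities come from partitioning [0, 1] at n_branch - 1 random
    # cut points, and rewards are drawn uniformly in [-1, 1].
    rng = random.Random(seed)
    states = [s for s in itertools.product(range(1, x_max + 1), repeat=d)]
    P, R = {}, {}
    for s in states:
        for a in range(n_actions):
            nxt = set()
            while len(nxt) < n_branch:
                nxt.add(tuple(min(max(int(round(rng.gauss(si, sigma))), 1), x_max)
                              for si in s))
            cuts = sorted(rng.random() for _ in range(n_branch - 1))
            probs = [hi - lo for lo, hi in zip([0.0] + cuts, cuts + [1.0])]
            P[(s, a)] = dict(zip(sorted(nxt), probs))
            R[(s, a)] = rng.uniform(-1.0, 1.0)
    return states, P, R

states, P, R = make_garnet()
```

With the default parameters this yields the 10 × 10 grid (100 states, 500 state-action pairs) used in the experiment, each with 5 next states and a valid probability distribution over them.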
For each Garnet problem, it is possible to compute an optimal policy π∗ thanks to the policy iteration algorithm. In this experiment, we construct 50 Garnets {G_p}_{1≤p≤50} as explained before. For each Garnet G_p, we build 10 data sets {D_{p,q}}_{1≤q≤10} composed of N sampled transitions (s_i, a_i, s′_i)_{i=1}^N drawn uniformly and independently. Thus, we are in the batch scenario. The minimization of ˆJ_{1,µN} and ˆJ²_{2,µN} via DCA, where T∗Q(s_i, a_i) is estimated by R(s_i, a_i) + γ max_{b∈A} Q(s′_i, b) (hence with uncontrolled bias), gives the algorithms called DCA1 and DCA2 respectively. DCA is initialized with θ_0 = 0 and the intermediate convex optimization problems are solved by a sub-gradient descent [18]. These two algorithms are compared with state-of-the-art Reinforcement Learning algorithms, namely LSPI (an API implementation) and Fitted-Q (an AVI implementation). The four algorithms use the tabular basis. Each algorithm outputs a function Q^{p,q}_A ∈ R^{S×A}, and the policy associated to Q^{p,q}_A is π^{p,q}_A(s) = argmax_{a∈A} Q^{p,q}_A(s, a). In order to quantify the performance of a given algorithm, we calculate the criterion T^{p,q}_A = E_ρ[V^{π∗} − V^{π^{p,q}_A}] / E_ρ[|V^{π∗}|], where V^{π^{p,q}_A} is computed via the policy evaluation algorithm. The mean performance criterion is T_A = (1/500) Σ_{p=1}^{50} Σ_{q=1}^{10} T^{p,q}_A. We also calculate, for each algorithm, the variance criterion std^p_A = (1/10) Σ_{q=1}^{10} (T^{p,q}_A − (1/10) Σ_{q=1}^{10} T^{p,q}_A)², and the resulting mean variance criterion is std_A = (1/50) Σ_{p=1}^{50} std^p_A. In Fig. 1(a), we plot the performance versus the number of samples. We observe that the 4 algorithms have similar performances, which shows that our alternative approach is competitive. In Fig.
1(b), we plot the standard deviation versus the number of samples.

[Figure 1: Garnet experiment. (a) Performance versus the number of samples for LSPI, DCA1, DCA2, Fitted-Q and a random policy; (b) standard deviation versus the number of samples for LSPI, DCA1, DCA2 and Fitted-Q.]

Here, we observe that the DCA algorithms have less variance, which is an advantage. This experiment shows that DC programming is relevant for RL, but it still has to prove its efficiency on real problems.

6 Conclusion and Perspectives

In this paper, we presented an alternative approach to the problem of solving large MDPs, estimating the optimal action-value function via DC programming. To do so, we first showed that minimizing a norm of the OBR is interesting. Then, we proved that the empirical norm of the OBR is consistent in the Vapnik sense (strict consistency). Finally, we framed the minimization of the empirical norm as a DC minimization, which allows us to rely on the literature on DCA. We conducted a generic experiment with a basic setting for DCA, as we chose a canonical explicit decomposition of our DC criterion and a sub-gradient descent to minimize the intermediate convex problems. We obtained results similar to AVI and API. Thus, an interesting perspective would be a less naive setting for DCA, choosing different explicit decompositions and a better convex solver for the intermediate convex minimization problems. Another interesting perspective is that our approach can be non-parametric. Indeed, as pointed out in [10], a convex minimization problem can be solved via boosting techniques, which avoids the choice of features. Therefore, each intermediate convex problem of DCA could be solved via a boosting technique, hence making DCA non-parametric. Thus, seeing the RL problem as a DC problem provides some interesting perspectives for future work.
Acknowledgements

The research leading to these results has received partial funding from the European Union Seventh Framework Program (FP7/2007-2013) under grant agreement number 270780 and the ANR ContInt program (MaRDi project, number ANR-12-CORD-021 01). We also would like to thank professors Le Thi Hoai An and Pham Dinh Tao for helpful discussions about DC programming.

References

[1] A. Antos, R. Munos, and C. Szepesvári. Fitted-Q iteration in continuous action-space MDPs. In Proc. of NIPS, 2007.
[2] A. Antos, C. Szepesvári, and R. Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 2008.
[3] T. Archibald, K. McKinnon, and L. Thomas. On the generation of Markov decision processes. Journal of the Operational Research Society, 1995.
[4] M.G. Azar, V. Gómez, and H.J. Kappen. Dynamic policy programming. The Journal of Machine Learning Research, 13(1), 2012.
[5] L. Baird. Residual algorithms: reinforcement learning with function approximation. In Proc. of ICML, 1995.
[6] D.P. Bertsekas. Dynamic programming and optimal control, volume 1. Athena Scientific, Belmont, MA, 1995.
[7] D.P. de Farias and B. Van Roy. The linear programming approach to approximate dynamic programming. Operations Research, 51, 2003.
[8] V. Desai, V. Farias, and C.C. Moallemi. A smoothed approximate linear program. In Proc. of NIPS, pages 459–467, 2009.
[9] A. Farahmand, R. Munos, and C. Szepesvári. Error propagation for approximate policy and value iteration. In Proc. of NIPS, 2010.
[10] A. Grubb and J.A. Bagnell. Generalized boosting algorithms for convex optimization. In Proc. of ICML, 2011.
[11] J.B. Hiriart-Urruty. Generalized differentiability, duality and optimization for problems dealing with differences of convex functions. In Convexity and duality in optimization. Springer, 1985.
[12] E. Klein, M. Geist, B. Piot, and O. Pietquin.
Inverse reinforcement learning through structured classification. In Proc. of NIPS, 2012.
[13] G. Lever, L. Baldassarre, A. Gretton, M. Pontil, and S. Grünewälder. Modelling transition dynamics in MDPs with RKHS embeddings. In Proc. of ICML, 2012.
[14] O. Maillard, R. Munos, A. Lazaric, and M. Ghavamzadeh. Finite-sample analysis of Bellman residual minimization. In Proc. of ACML, 2010.
[15] R. Munos. Performance bounds in Lp-norm for approximate value iteration. SIAM Journal on Control and Optimization, 2007.
[16] M.L. Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 1994.
[17] B. Scherrer. Approximate policy iteration schemes: a comparison. In Proc. of ICML, 2014.
[18] N.Z. Shor, K.C. Kiwiel, and A. Ruszczyński. Minimization methods for non-differentiable functions. Springer-Verlag, 1985.
[19] P.D. Tao and L.T.H. An. Convex analysis approach to DC programming: theory, algorithms and applications. Acta Mathematica Vietnamica, 22:289–355, 1997.
[20] P.D. Tao and L.T.H. An. The DC programming and DCA revisited with DC models of real world nonconvex optimization problems. Annals of Operations Research, 133:23–46, 2005.
[21] G. Taylor and R. Parr. Value function approximation in noisy environments using locally smoothed regularized approximate linear programs. In Proc. of UAI, 2012.
[22] V. Vapnik. Statistical learning theory. Wiley, 1998.
Local Linear Convergence of Forward–Backward under Partial Smoothness

Jingwei Liang and Jalal M. Fadili
GREYC, CNRS-ENSICAEN-Univ. Caen
{Jingwei.Liang,Jalal.Fadili}@greyc.ensicaen.fr

Gabriel Peyré
CEREMADE, CNRS-Univ. Paris-Dauphine
Gabriel.Peyre@ceremade.dauphine.fr

Abstract

In this paper, we consider the Forward–Backward proximal splitting algorithm to minimize the sum of two proper closed convex functions, one of which has a Lipschitz continuous gradient and the other is partly smooth relative to an active manifold M. We propose a generic framework under which we show that the Forward–Backward (i) correctly identifies the active manifold M in a finite number of iterations, and then (ii) enters a local linear convergence regime that we characterize precisely. This gives a grounded and unified explanation of the typical behaviour that has been observed numerically for many problems encompassed in our framework, including the Lasso, the group Lasso, the fused Lasso and nuclear norm regularization, to name a few. These results may have numerous applications, including in signal/image processing, sparse recovery and machine learning.

1 Introduction

1.1 Problem statement

Convex optimization has become ubiquitous in most quantitative disciplines of science. A common trend in modern science is the increase in size of datasets, which drives the need for more efficient optimization methods. Our goal is the generic minimization of composite functions of the form

min_{x∈R^n} Φ(x) = F(x) + J(x), (1.1)

where
(A.1) J : R^n → R ∪ {+∞} is a proper, closed and convex function;
(A.2) F is a convex and C^{1,1}(R^n) function whose gradient is β-Lipschitz continuous;
(A.3) Argmin Φ ≠ ∅.

The class of problems (1.1) covers many popular non-smooth convex optimization problems encountered in various fields throughout science and engineering, including signal/image processing, machine learning and classification.
For instance, taking F = (1/2λ)∥y − A·∥² for some A ∈ R^{m×n} and λ > 0, we recover the Lasso problem when J = ∥·∥₁, the group Lasso for J = ∥·∥_{1,2}, the fused Lasso for J = ∥D∗·∥₁ with D = [D_DIF, ϵId] where D_DIF is the finite difference operator, anti-sparsity regularization when J = ∥·∥_∞, and nuclear norm regularization when J = ∥·∥_∗.

The standard (non-relaxed) version of the Forward–Backward (FB) splitting algorithm [3] for solving (1.1) updates to a new iterate xk+1 according to

xk+1 = prox_{γkJ}(xk − γk∇F(xk)), (1.2)

starting from any point x0 ∈ R^n, where 0 < γ ≤ γk ≤ γ̄ < 2/β. Recall that the proximity operator is defined, for γ > 0, as prox_{γJ}(x) = argmin_{z∈R^n} (1/2γ)∥z − x∥² + J(z).

1.2 Contributions

In this paper, we present a unified local linear convergence analysis of the FB algorithm for solving (1.1) when J is in addition partly smooth relative to a manifold M (see Definition 2.1 for details). The class of partly smooth functions is very large and encompasses all previously discussed examples as special cases. More precisely, we first show that FB has a finite identification property, meaning that after a finite number of iterations, say K, all iterates obey xk ∈ M for k ≥ K. Exploiting this property, we then show that after such a large enough number of iterations, xk converges locally linearly. We characterize this regime and the rates precisely, depending on the structure of the active manifold M. In general, xk converges locally Q-linearly, and when M is a linear subspace, the convergence becomes R-linear. Several experimental results on some of the problems discussed above are provided to support our theoretical findings.

1.3 Related work

Finite support identification and local R-linear convergence of FB to solve the Lasso problem, though in an infinite-dimensional setting, are established in [4] under either a very restrictive injectivity assumption, or a non-degeneracy assumption which is a specialization of ours (see (3.1)) to the ℓ1 norm.
A similar result is proved in [13] for F a smooth convex and locally C² function and J the ℓ1 norm, under restricted injectivity and non-degeneracy assumptions. The ℓ1 norm is a partly smooth function and hence covered by our results. [1] proved Q-linear convergence of FB to solve (1.1) for F satisfying restricted smoothness and strong convexity assumptions, and J being a so-called convex decomposable regularizer. Again, the latter is a small subclass of partly smooth functions, and their result is thus covered by ours. For example, our framework covers the total variation (TV) semi-norm and ℓ∞-norm regularizers, which are not decomposable. In [15, 16], the authors have shown finite identification of active manifolds associated to partly smooth functions for various algorithms, including the (sub)gradient projection method, Newton-like methods, and the proximal point algorithm. Their work extends that of e.g. [28] on identifiable surfaces from the convex case to a general non-smooth setting. Using these results, [14] considered the algorithm of [25] to solve (1.1) where J is partly smooth, but not necessarily convex, and F is C²(R^n), and proved finite identification of the active manifold. However, the convergence rate remains an open problem in all these works.

1.4 Notations

Suppose M ⊂ R^n is a C²-manifold around x ∈ R^n; denote T_M(x) the tangent space of M at x. The tangent model subspace is defined as Tx = Lin(∂J(x))^⊥, where Lin(C) is the linear hull of the convex set C ⊂ R^n. For a linear subspace V, we denote P_V the orthogonal projector onto V, and for a matrix A ∈ R^{m×n}, A_V = A ∘ P_V. Define the generalized sign vector ex = P_{Tx}(∂J(x)). For a convex set C ⊂ R^n, ri(C) denotes its relative interior, i.e. the interior relative to its affine hull.

2 Partial smoothness

In addition to (A.1), our central assumption is that J is a partly smooth function. Partial smoothness of functions was originally defined in [17].
Our definition hereafter specializes it to the case of proper closed convex functions.

Definition 2.1. Let J be a proper closed convex function such that ∂J(x) ≠ ∅. J is partly smooth at x relative to a set M containing x if
(1) (Smoothness) M is a C²-manifold around x and J restricted to M is C² around x.
(2) (Sharpness) The tangent space T_M(x) is Tx.
(3) (Continuity) The set-valued mapping ∂J is continuous at x relative to M.

In the following, the class of functions partly smooth at x relative to M is denoted PSx(M). When M is an affine manifold, then M = x + Tx, and we denote this subclass PSAx(x + Tx). When M is a linear manifold, then M = Tx, and we denote this subclass PSLx(Tx). Capitalizing on the results of [17], it can be shown that under mild transversality assumptions, the set of continuous convex partly smooth functions is closed under addition and pre-composition by a linear operator. Moreover, absolutely permutation-invariant convex and partly smooth functions of the singular values of a real matrix, i.e. spectral functions, are convex and partly smooth spectral functions of the matrix [10]. It then follows that all the examples discussed in Section 1, including the ℓ1, ℓ1−ℓ2, ℓ∞, TV and nuclear norm regularizers, are partly smooth. In fact, the nuclear norm is partly smooth at a matrix x relative to the manifold M = {x′ : rank(x′) = rank(x)}. The first three regularizers all belong to the class PSLx(Tx); see Section 4 and [27] for details. We now define a subclass of partly smooth functions where the active manifold is actually a subspace and the generalized sign vector ex is locally constant.

Definition 2.2. J belongs to the class PSSx(Tx) if and only if J ∈ PSAx(x + Tx) or J ∈ PSLx(Tx), and ex is constant near x, i.e. there exists a neighbourhood U of x such that ex′ = ex for all x′ ∈ Tx ∩ U.
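For a concrete instance of these definitions (a small numeric check of our own, anticipating Example 4.1), take J = ∥·∥₁: the model subspace is Tx = {u : supp(u) ⊆ supp(x)}, every subgradient projects onto the single vector ex = sign(x), and ex is constant under small perturbations of x within Tx, which is the PSS property of Definition 2.2:

```python
import numpy as np

x = np.array([2.0, -0.5, 0.0, 0.0, 1.0])
off = (x == 0)                      # complement of supp(x)
e_x = np.sign(x)                    # generalized sign vector e_x

rng = np.random.default_rng(1)
for _ in range(100):
    # A generic subgradient of ||.||_1 at x: sign(x_i) on the support,
    # any value in [-1, 1] off the support.
    u = np.sign(x)
    u[off] = rng.uniform(-1.0, 1.0, size=off.sum())
    proj = np.where(off, 0.0, u)    # orthogonal projection onto T_x
    assert np.allclose(proj, e_x)   # P_{T_x}(dJ(x)) is the singleton {e_x}

# e_x is locally constant along T_x: perturbations supported on supp(x),
# smaller than the smallest nonzero |x_i|, do not change the sign pattern.
for _ in range(100):
    xp = x + np.where(off, 0.0, rng.uniform(-0.1, 0.1, size=x.size))
    assert np.array_equal(np.sign(xp), e_x)
```

This is exactly why the ℓ1 norm sits in PSSx(Tx), while e.g. the ℓ1−ℓ2 norm (Example 4.2) has a subspace model tangent but a sign vector that varies continuously with x.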
A typical family of functions complying with this definition is that of partly polyhedral functions [26, Section 6.5], which includes the ℓ1 and ℓ∞ norms, and the TV semi-norm.

3 Local linear convergence of the FB method

In this section, we state our main result on finite identification and local linear convergence of FB.

Theorem 3.1. Assume that (A.1)-(A.3) hold. Suppose that the FB scheme is used to create a sequence xk which converges to x⋆ ∈ Argmin Φ such that J ∈ PSx⋆(Mx⋆), F is C² near x⋆ and

−∇F(x⋆) ∈ ri(∂J(x⋆)). (3.1)

Then the following holds.
(1) The FB scheme (1.2) has the finite identification property, i.e. there exists K ≥ 0 such that for all k ≥ K, xk ∈ Mx⋆.
(2) Suppose moreover that there exists α > 0 such that

P_T ∇²F(x⋆) P_T ⪰ α Id, (3.2)

where T := Tx⋆. Then for all k ≥ K, the following holds.
(i) Q-linear convergence: if 0 < γ ≤ γk ≤ γ̄ < min(2αβ⁻², 2β⁻¹), then for any ρ with ρ̃ < ρ < 1, ∥xk+1 − x⋆∥ ≤ ρ∥xk − x⋆∥, where ρ̃² = max(q(γ), q(γ̄)) ∈ [0, 1[ and q(γ) = 1 − 2αγ + β²γ².
(ii) R-linear convergence: if J ∈ PSAx⋆(x⋆ + T) or J ∈ PSLx⋆(T), then for 0 < γ ≤ γk ≤ γ̄ < min(2αν⁻², 2β⁻¹), where ν ≤ β is the Lipschitz constant of P_T ∇F P_T, we have ∥xk+1 − x⋆∥ ≤ ρk∥xk − x⋆∥, where ρk² = 1 − 2αγk + ν²γk² ∈ [0, 1[. Moreover, if α/ν² ≤ γ̄ and we set γk ≡ α/ν², then the optimal linear rate ρ∗ = √(1 − α²/ν²) is achieved.

Remark 3.2.
• The non-degeneracy assumption (3.1) can be viewed as a geometric generalization of strict complementarity in non-linear programming. Building on the arguments of [16], it turns out to be almost a necessary condition for finite identification of Mx⋆.
• Under the non-degeneracy and local strong convexity assumptions (3.1)-(3.2), one can actually show that x⋆ is unique by extending the reasoning in [26].
• For F = G ∘ A, where G satisfies (A.2), assumption (3.2) and the constant α can be restated in terms of local strong convexity of G and restricted injectivity of A on T, i.e. Ker(A) ∩ T = {0}.
• When F = (1/2)∥y − A·∥², not only is the minimizer x⋆ unique, but the rates in Theorem 3.1 can be refined further, as the gradient operator ∇F becomes linear.
• Partial smoothness guarantees that xk reaches the active manifold in finite time, hence raising the hope of acceleration using second-order information. For instance, one can think of turning to geometric methods along the manifold Mx⋆, where faster convergence rates can be achieved. This is also the motivation behind the work of e.g. [19].

When J ∈ PSSx⋆(T), it turns out that the restricted convexity assumption (3.2) of Theorem 3.1 can be removed in some cases, but at the price of less sharp rates.

Theorem 3.3. Assume that (A.1)-(A.3) hold. For x⋆ ∈ Argmin Φ, suppose that J ∈ PSSx⋆(Tx⋆), (3.1) is fulfilled, and there exists a subspace V such that Ker(P_T ∇²F(x) P_T) = V for any x ∈ B_ϵ(x⋆), ϵ > 0. Let the FB scheme be used to create a sequence xk that converges to x⋆ with 0 < γ ≤ γk ≤ γ̄ < min(2αβ⁻², 2β⁻¹), where α > 0 (see the proof). Then there exist a constant C > 0 and ρ ∈ [0, 1[ such that for all k large enough, ∥xk − x⋆∥ ≤ Cρ^k.

A typical example where this result applies is F = G ∘ A with G locally strongly convex, in which case V = Ker(A_T).

4 Numerical experiments

In this section, we describe some examples demonstrating the applicability of our results. More precisely, we consider solving

min_{x∈R^n} (1/2)∥y − Ax∥² + λJ(x) (4.1)

where y ∈ R^m is the observation, A : R^n → R^m, λ is the tradeoff parameter, and J is either the ℓ1-norm, the ℓ∞-norm, the ℓ1−ℓ2-norm, the TV semi-norm or the nuclear norm.

Example 4.1 (ℓ1-norm). For x ∈ R^n, the sparsity-promoting ℓ1-norm [8, 23] is J(x) = Σ_{i=1}^n |x_i|. It can be verified that J is a polyhedral norm, and thus J ∈ PSSx(Tx), with the model subspace M = Tx = {u ∈ R^n : supp(u) ⊆ supp(x)} and ex = sign(x). The proximity operator of the ℓ1-norm is given by simple soft-thresholding.

Example 4.2 (ℓ1−ℓ2-norm). The ℓ1−ℓ2-norm is usually used to promote group-structured sparsity [29].
Let the support of x ∈ R^n be divided into non-overlapping blocks B such that ∪_{b∈B} b = {1, . . . , n}. The ℓ1−ℓ2-norm is given by J(x) = ∥x∥_B = Σ_{b∈B} ∥x_b∥, where x_b = (x_i)_{i∈b} ∈ R^{|b|}. In general ∥·∥_B is not polyhedral, yet it is partly smooth relative to the linear manifold M = Tx = {u ∈ R^n : supp_B(u) ⊆ supp_B(x)}, with ex = (N(x_b))_{b∈B}, where supp_B(x) = ∪{b : x_b ≠ 0}, N(x) = x/∥x∥ and N(0) = 0. The proximity operator of the ℓ1−ℓ2 norm is given by simple block soft-thresholding.

Example 4.3 (Total Variation). As stated in the introduction, partial smoothness is preserved under pre-composition by a linear operator. Let J0 be a closed convex function and D a linear operator. Popular examples are the TV semi-norm, in which case J0 = ∥·∥₁ and D∗ = D_DIF is a finite difference approximation of the derivative [22], or the fused Lasso for D = [D_DIF, ϵId] [24]. If J0 ∈ PS_{D∗x}(M0), then it is shown in [17, Theorem 4.2] that under an appropriate transversality condition, J ∈ PSx(M) where M = {u ∈ R^n : D∗u ∈ M0}. In particular, for the case of the TV semi-norm, we have J ∈ PSSx(Tx) with M = Tx = {u ∈ R^n : supp(D∗u) ⊆ I} and ex = P_{Tx} D sign(D∗x), where I = supp(D∗x). The proximity operator of the 1D TV, though not available in closed form, can be obtained efficiently using either the taut string algorithm [11] or graph cuts [7].

Example 4.4 (Nuclear norm). Low rank is the spectral extension of vector sparsity to matrix-valued data x ∈ R^{n1×n2}, i.e. imposing sparsity on the singular values of x. Let x = U Λx V∗ be a reduced singular value decomposition (SVD) of x. The nuclear norm of x is defined as J(x) = ∥x∥∗ = Σ_{i=1}^r (Λx)_i, where rank(x) = r. It has been used for instance as an SDP convex relaxation for many problems, including in machine learning [2, 12], matrix completion [21, 5] and phase retrieval [6]. It can be shown that the nuclear norm is partly smooth relative to the manifold [18, Example 2] M = {z ∈ R^{n1×n2} : rank(z) = r}.
The tangent space to M at x and ex are given by T_M(x) = {z ∈ R^{n1×n2} : z = UL∗ + MV∗, L ∈ R^{n2×r}, M ∈ R^{n1×r}} and ex = UV∗. The proximity operator of the nuclear norm is soft-thresholding applied to the singular values.

Recovery from random measurements. In these examples, the forward observation model is

y = Ax0 + ε, ε ∼ N(0, δ²), (4.2)

where A ∈ R^{m×n} is generated uniformly at random from the Gaussian ensemble with i.i.d. zero-mean and unit variance entries. The tested experimental settings are
(a) ℓ1-norm: m = 48 and n = 128, x0 is 8-sparse;
(b) Total Variation: m = 48 and n = 128, D_DIF x0 is 8-sparse;
(c) ℓ∞-norm: m = 123 and n = 128, x0 has 10 saturating entries;
(d) ℓ1−ℓ2-norm: m = 48 and n = 128, x0 has 2 non-zero blocks of size 4;
(e) Nuclear norm: m = 1425 and n = 2500, x0 ∈ R^{50×50} and rank(x0) = 5.

The number of measurements is chosen sufficiently large, δ small enough and λ of the order of δ, so that [27, Theorem 1] applies, yielding that the minimizer of (4.1) is unique and verifies the non-degeneracy and restricted strong convexity assumptions (3.1)-(3.2). The convergence profiles of ∥xk − x⋆∥ are depicted in Figure 1(a)-(e). Only the local curves after activity identification are shown. For ℓ1, TV and ℓ∞, the predicted rate coincides exactly with the observed one. This is because these regularizers are all partly polyhedral gauges and the data fidelity is quadratic, hence making the predictions of Theorem 3.1(ii) exact. For the ℓ1−ℓ2-norm, although its active manifold is still a subspace, the generalized sign vector is not locally constant along the iterates, which entails that the predicted rate of Theorem 3.1(ii) slightly overestimates the observed one. For the nuclear norm, whose active manifold is not linear, Theorem 3.1(i) applies, and the observed and predicted rates are again close.

TV deconvolution. In this image processing example, y is a degraded image generated according to the same forward model (4.2), but now A is a convolution with a Gaussian kernel.
The anisotropic TV regularizer is used. The convergence profile is shown in Figure 1(f). Assumptions (3.1)-(3.2) are checked a posteriori. This, together with the fact that the anisotropic TV is polyhedral, explains why the predicted rate is again exact.

[Figure 1: Observed and predicted local convergence profiles of the FB method (1.2) in terms of ∥xk − x⋆∥ for different types of partly smooth functions. (a) ℓ1-norm (Lasso); (b) TV semi-norm; (c) ℓ∞-norm; (d) ℓ1−ℓ2-norm; (e) Nuclear norm; (f) TV deconvolution.]

5 Proofs

Lemma 5.1. Suppose that J ∈ PSx(M). Then for any x′ ∈ M ∩ U, where U is a neighbourhood of x, the projector P_M(x′) is uniquely valued and C¹ around x, and thus x′ − x = P_{Tx}(x′ − x) + o(∥x′ − x∥). If J ∈ PSAx(x + Tx) or J ∈ PSLx(Tx), then x′ − x = P_{Tx}(x′ − x).

Proof. Partial smoothness implies that M is a C²-manifold around x; then P_M(x′) is uniquely valued [20] and moreover C¹ near x [18, Lemma 4]. Thus, continuous differentiability shows x′ − x = P_M(x′) − P_M(x) = DP_M(x)(x′ − x) + o(∥x′ − x∥), where DP_M(x) is the derivative of P_M at x. By virtue of [18, Lemma 4] and the sharpness property of J, this derivative is given by DP_M(x) = P_{T_M(x)} = P_{Tx}. The case where M is affine or linear is immediate. This concludes the proof.

Proof of Theorem 3.1.
1. Classical convergence results for the FB scheme, e.g.
[9], show that xk converges to some x⋆ ∈ Argmin Φ ≠ ∅ by assumption (A.3). Assumptions (A.1)-(A.2) entail that (3.1) is equivalent to 0 ∈ ri(∂Φ(x⋆)). Since F is C² around x⋆, the smooth perturbation rule for partly smooth functions [17, Corollary 4.7] ensures that Φ ∈ PSx⋆(M). By definition of xk+1, we have (1/γk)(Gk(xk) − Gk(xk+1)) ∈ ∂Φ(xk+1), where Gk = Id − γk∇F. By the Baillon-Haddad theorem, Gk is non-expansive, hence dist(0, ∂Φ(xk+1)) ≤ (1/γk)∥Gk(xk) − Gk(xk+1)∥ ≤ (1/γk)∥xk − xk+1∥. Since lim inf γk ≥ γ > 0, we obtain dist(0, ∂Φ(xk+1)) → 0. Owing to assumptions (A.1)-(A.2), Φ is subdifferentially continuous and thus Φ(xk) → Φ(x⋆). Altogether, this shows that the conditions of [15, Theorem 5.3] are fulfilled, whence the claim follows.
2. Take K > 0 sufficiently large such that for all k ≥ K, xk ∈ Mx⋆ and xk ∈ B_ϵ(x⋆).
(i) Since prox_{γkJ} is firmly non-expansive, hence non-expansive, we have

∥xk+1 − x⋆∥ = ∥prox_{γkJ}Gk xk − prox_{γkJ}Gk x⋆∥ ≤ ∥Gk xk − Gk x⋆∥. (5.1)

By virtue of Lemma 5.1, we have xk − x⋆ = P_T(xk − x⋆) + o(∥xk − x⋆∥). This, together with local C² smoothness of F and Lipschitz continuity of ∇F, entails

⟨xk − x⋆, ∇F(xk) − ∇F(x⋆)⟩ = ∫₀¹ ⟨xk − x⋆, ∇²F(x⋆ + t(xk − x⋆))(xk − x⋆)⟩ dt
= ∫₀¹ ⟨P_T(xk − x⋆), ∇²F(x⋆ + t(xk − x⋆)) P_T(xk − x⋆)⟩ dt + o(∥xk − x⋆∥²)
≥ α∥xk − x⋆∥² + o(∥xk − x⋆∥²). (5.2)

Indeed, since (3.2) holds and ∇²F(x) depends continuously on x, there exists ϵ > 0 such that P_T ∇²F(x) P_T ⪰ α Id for all x ∈ B_ϵ(x⋆). Thus, a classical development of the right-hand side of (5.1) yields

∥xk+1 − x⋆∥² ≤ ∥Gk xk − Gk x⋆∥² = ∥(xk − x⋆) − γk(∇F(xk) − ∇F(x⋆))∥²
= ∥xk − x⋆∥² − 2γk⟨xk − x⋆, ∇F(xk) − ∇F(x⋆)⟩ + γk²∥∇F(xk) − ∇F(x⋆)∥²
≤ ∥xk − x⋆∥² − 2γkα∥xk − x⋆∥² + γk²β²∥xk − x⋆∥² + o(∥xk − x⋆∥²)
= (1 − 2αγk + β²γk²)∥xk − x⋆∥² + o(∥xk − x⋆∥²). (5.3)

Taking the lim sup in this inequality gives

lim sup_{k→+∞} ∥xk+1 − x⋆∥²/∥xk − x⋆∥² ≤ q(γk) = 1 − 2αγk + β²γk². (5.4)

It is clear that for 0 < γ ≤ γk ≤ γ̄ < min(2αβ⁻², 2β⁻¹), q(γk) ∈ [0, 1[ and q(γk) ≤ ρ̃² = max(q(γ), q(γ̄)).
Inserting this in (5.4) and using classical arguments yields the result.
(ii) We give the proof for M = T; that for M = x⋆ + T is similar. Since xk and x⋆ belong to T, from xk+1 = proxγkJ(Gk xk) we have
Gk xk − xk+1 ∈ γk∂J(xk+1) ⇒ xk+1 = PT(Gk xk − γk∂J(xk+1)) = PT Gk xk − γk ek+1.
Similarly, we have x⋆ = PT Gk x⋆ − γk e⋆. We then arrive at
(xk+1 − x⋆) + γk(ek+1 − e⋆) = (xk − x⋆) − γk(PT∇F(PT xk) − PT∇F(PT x⋆)). (5.5)
Moreover, maximal monotonicity of γk∂J gives
∥(xk+1 − x⋆) + γk(ek+1 − e⋆)∥² = ∥xk+1 − x⋆∥² + 2⟨xk+1 − x⋆, γk(ek+1 − e⋆)⟩ + γk²∥ek+1 − e⋆∥² ≥ ∥xk+1 − x⋆∥².
It is straightforward to see that now (5.2) becomes
⟨xk − x⋆, PT∇F(PT xk) − PT∇F(PT x⋆)⟩ ≥ α∥xk − x⋆∥².
Let ν be the Lipschitz constant of PT∇F PT; obviously ν ≤ β. Developing ∥PT(Gk xk − Gk x⋆)∥² similarly to (5.3), we obtain
∥xk+1 − x⋆∥² ≤ (1 − 2αγk + ν²γk²)∥xk − x⋆∥² = ρk²∥xk − x⋆∥²,
where ρk ∈ [0, 1[ for 0 < γ ≤ γk ≤ γ̄ < min(2α/ν², 2/β). ρk is minimized at γk = α/ν², which gives the proposed optimal rate whenever this choice obeys the given upper bound.
Proof of Theorem 3.3. Arguing similarly to the proof of Theorem 3.1(ii), and using in addition that e⋆ = ex⋆ is locally constant, we get
xk+1 − x⋆ = (xk − x⋆) − γk(PT∇F(PT xk) − PT∇F(PT x⋆))
= (xk − x⋆) − γk ∫_0^1 PT∇²F(x⋆ + t(xk − x⋆)) PT (xk − x⋆) dt.
Denote Ht = PT∇²F(x⋆ + t(xk − x⋆))PT ⪰ 0. Using that Ht is self-adjoint, we have PV xk+1 = PV xk. Since xk → x⋆, it follows that PV xk = PV x⋆ for all k sufficiently large. Observing that xk − x⋆ = PV⊥(xk − x⋆) for all large k, we get
xk+1 − x⋆ = xk − x⋆ − γk ∫_0^1 PV⊥ Ht PV⊥ (xk − x⋆) dt.
Observe that V⊥ ⊂ T. By definition, Bt = Ht^(1/2) PV⊥ is injective, and therefore there exists σ > 0 such that ∥Bt x∥² > σ∥x∥² for all x ≠ 0 and t ∈ [0, 1].
We then have
∥xk+1 − x⋆∥² = ∥xk − x⋆∥² − 2γk ∫_0^1 ⟨xk − x⋆, Bt⊤Bt(xk − x⋆)⟩ dt + γk²∥PV⊥PT(∇F(xk) − ∇F(x⋆))∥²
= ∥xk − x⋆∥² − 2γk ∫_0^1 ∥Bt(xk − x⋆)∥² dt + γk²∥PV⊥PT(∇F(xk) − ∇F(x⋆))∥²
≤ ∥xk − x⋆∥² − 2γkσ∥xk − x⋆∥² + γk²β²∥PV⊥(xk − x⋆)∥²
≤ ∥xk − x⋆∥² − 2γkσ∥xk − x⋆∥² + γk²β²∥xk − x⋆∥² = ρk²∥xk − x⋆∥²,
where we used ∫_0^1 ∥Bt(xk − x⋆)∥² dt ≥ σ∥xk − x⋆∥², the β-Lipschitz continuity of ∇F, and the non-expansiveness of projectors. It is easy to see again that ρk ∈ [0, 1[ whenever 0 < γ ≤ γk ≤ γ̄ < min(2β⁻¹, 2σβ⁻²).

References
[1] A. Agarwal, S. Negahban, and M. Wainwright. Fast global convergence of gradient methods for high-dimensional statistical recovery. The Annals of Statistics, 40(5):2452–2482, 2012.
[2] F. Bach. Consistency of trace norm minimization. The Journal of Machine Learning Research, 9:1019–1048, 2008.
[3] H. H. Bauschke and P. L. Combettes. Convex analysis and monotone operator theory in Hilbert spaces. Springer, 2011.
[4] K. Bredies and D. A. Lorenz. Linear convergence of iterative soft-thresholding. Journal of Fourier Analysis and Applications, 14(5-6):813–837, 2008.
[5] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[6] E. J. Candès, T. Strohmer, and V. Voroninski. Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66(8):1241–1274, 2013.
[7] A. Chambolle and J. Darbon. A parametric maximum flow approach for discrete total variation regularization. In Image Processing and Analysis with Graphs. CRC Press, 2012.
[8] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1999.
[9] P. L. Combettes and V. R. Wajs. Signal recovery by proximal Forward–Backward splitting. Multiscale Modeling & Simulation, 4(4):1168–1200, 2005.
[10] A. Daniilidis, D. Drusvyatskiy, and A. S. Lewis. Orthogonal invariance and identifiability.
to appear in SIAM J. Matrix Anal. Appl., 2014.
[11] P. L. Davies and A. Kovac. Local extremes, runs, strings and multiresolution. Ann. Statist., 29:1–65, 2001.
[12] E. Grave, G. Obozinski, and F. Bach. Trace Lasso: a trace norm regularization for correlated designs. arXiv preprint arXiv:1109.1990, 2011.
[13] E. Hale, W. Yin, and Y. Zhang. Fixed-point continuation for ℓ1-minimization: Methodology and convergence. SIAM Journal on Optimization, 19(3):1107–1130, 2008.
[14] W. L. Hare. Identifying active manifolds in regularization problems. In H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, editors, Fixed-Point Algorithms for Inverse Problems in Science and Engineering, volume 49 of Springer Optimization and Its Applications, chapter 13. Springer, 2011.
[15] W. L. Hare and A. S. Lewis. Identifying active constraints via partial smoothness and prox-regularity. Journal of Convex Analysis, 11(2):251–266, 2004.
[16] W. L. Hare and A. S. Lewis. Identifying active manifolds. Algorithmic Operations Research, 2(2):75–82, 2007.
[17] A. S. Lewis. Active sets, nonsmoothness, and sensitivity. SIAM Journal on Optimization, 13(3):702–725, 2003.
[18] A. S. Lewis and J. Malick. Alternating projections on manifolds. Mathematics of Operations Research, 33(1):216–234, 2008.
[19] S. A. Miller and J. Malick. Newton methods for nonsmooth convex minimization: connections among Lagrangian, Riemannian Newton and SQP methods. Mathematical Programming, 104(2-3):609–633, 2005.
[20] R. A. Poliquin, R. T. Rockafellar, and L. Thibault. Local differentiability of distance functions. Trans. Amer. Math. Soc., 352:5231–5249, 2000.
[21] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[22] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1):259–268, 1992.
[23] R.
Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267–288, 1996.
[24] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused Lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1):91–108, 2004.
[25] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Math. Prog. (Ser. B), 117, 2009.
[26] S. Vaiter, M. Golbabaee, M. J. Fadili, and G. Peyré. Model selection with low complexity priors. Available at arXiv:1304.6033, 2013.
[27] S. Vaiter, G. Peyré, and M. J. Fadili. Model consistency of partly smooth regularizers. Available at arXiv:1405.1004, 2014.
[28] S. J. Wright. Identifiable surfaces in constrained optimization. SIAM Journal on Control and Optimization, 31(4):1063–1079, 1993.
[29] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2005.
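As a concluding illustration of the rates analyzed above, here is a minimal numerical sketch (not from the paper; problem sizes, the regularization weight, and iteration counts are arbitrary choices) of the forward–backward scheme applied to the Lasso, whose local linear convergence is the content of Theorem 3.1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 20
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
lam = 0.5
beta = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of grad F, F(x) = 0.5*||Ax - b||^2
gamma = 1.0 / beta                  # step size within (0, 2/beta)

def prox_l1(x, t):
    # proximal operator of t*||.||_1 (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fb_step(x):
    # one forward-backward step: x+ = prox_{gamma J}(x - gamma * grad F(x))
    return prox_l1(x - gamma * (A.T @ (A @ x - b)), gamma * lam)

# high-accuracy reference solution x_star
x_star = np.zeros(d)
for _ in range(30000):
    x_star = fb_step(x_star)

# track ||x_k - x_star|| along a fresh run
x = np.zeros(d)
errs = []
for _ in range(1500):
    x = fb_step(x)
    errs.append(np.linalg.norm(x - x_star))

# once the active support is identified, the decay is linear
tail = [errs[k + 1] / errs[k] for k in range(len(errs) - 1) if 1e-10 < errs[k] < 1e-2]
print("max contraction factor in the linear regime:", max(tail))
```

The observed tail contraction factor plays the role of ρk in Theorem 3.1(ii); on well-conditioned random data it is strictly below 1, matching the "theoretical vs. practical" agreement shown in Figure 1(a).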
Improved Distributed Principal Component Analysis Maria-Florina Balcan School of Computer Science Carnegie Mellon University ninamf@cs.cmu.edu Vandana Kanchanapally School of Computer Science Georgia Institute of Technology vvandana@gatech.edu Yingyu Liang Department of Computer Science Princeton University yingyul@cs.princeton.edu David Woodruff Almaden Research Center IBM Research dpwoodru@us.ibm.com Abstract We study the distributed computing setting in which there are multiple servers, each holding a set of points, who wish to compute functions on the union of their point sets. A key task in this setting is Principal Component Analysis (PCA), in which the servers would like to compute a low dimensional subspace capturing as much of the variance of the union of their point sets as possible. Given a procedure for approximate PCA, one can use it to approximately solve problems such as k-means clustering and low rank approximation. The essential properties of an approximate distributed PCA algorithm are its communication cost and computational efficiency for a given desired accuracy in downstream applications. We give new algorithms and analyses for distributed PCA which lead to improved communication and computational costs for k-means clustering and related problems. Our empirical study on real world data shows a speedup of orders of magnitude, preserving communication with only a negligible degradation in solution quality. Some of these techniques we develop, such as a general transformation from a constant success probability subspace embedding to a high success probability subspace embedding with a dimension and sparsity independent of the success probability, may be of independent interest. 1 Introduction Since data is often partitioned across multiple servers [20, 7, 18], there is an increased interest in computing on it in the distributed model. A basic tool for distributed data analysis is Principal Component Analysis (PCA). 
The goal of PCA is to find an r-dimensional (affine) subspace that captures as much of the variance of the data as possible. Hence, it can reveal low-dimensional structure in very high dimensional data. Moreover, it can serve as a preprocessing step to reduce the data dimension in various machine learning tasks, such as Non-Negative Matrix Factorization (NNMF) [15] and Latent Dirichlet Allocation (LDA) [3]. In the distributed model, approximate PCA was used by Feldman et al. [9] for solving a number of shape fitting problems such as k-means clustering, where the approximation is in the form of a coreset, and has the property that local coresets can be easily combined across servers into a global coreset, thereby providing an approximate PCA to the union of the data sets. Designing small coresets therefore leads to communication-efficient protocols. Coresets have the nice property that their size typically does not depend on the number n of points being approximated. A beautiful property of the coresets developed in [9] is that for approximate PCA their size also only depends linearly on the dimension d, whereas previous coresets depended quadratically on d [8]. This gives the best known communication protocols for approximate PCA and k-means clustering. 1 Despite this recent exciting progress, several important questions remain. First, can we improve the communication further as a function of the number of servers, the approximation error, and other parameters of the downstream applications (such as the number k of clusters in k-means clustering)? Second, while preserving optimal or nearly-optimal communication, can we improve the computational costs of the protocols? We note that in the protocols of Feldman et al. each server has to run a singular value decomposition (SVD) on her local data set, while additional work needs to be performed to combine the outputs of each server into a global approximate PCA. 
Third, are these algorithms practical and do they scale well with large-scale datasets? In this paper we give answers to the above questions. To state our results more precisely, we first define the model and the problems.

Communication Model. In the distributed setting, we consider a set of s nodes V = {vi, 1 ≤ i ≤ s}, each of which can communicate with a central coordinator v0. On each node vi, there is a local data matrix Pi ∈ R^{ni×d} having ni data points in d dimensions (ni > d). The global data P ∈ R^{n×d} is then a concatenation of the local data matrices, i.e. P⊤ = [P1⊤, P2⊤, . . . , Ps⊤] and n = Σ_{i=1}^s ni. Let pi denote the i-th row of P. Throughout the paper, we assume that the data points are centered to have zero mean, i.e., Σ_{i=1}^n pi = 0. Uncentered data requires a rank-one modification to the algorithms, whose communication and computation costs are dominated by those of the other steps.

Approximate PCA and ℓ2-Error Fitting. For a matrix A = [aij], let ∥A∥F² = Σ_{i,j} aij² be its squared Frobenius norm, and let σi(A) be the i-th singular value of A. Let A^(t) denote the matrix that contains the first t columns of A. Let LX denote the linear subspace spanned by the columns of X. For a point p, let πL(p) be its projection onto subspace L, and let πX(p) be shorthand for πLX(p). For a point p ∈ R^d and a subspace L ⊆ R^d, we denote the squared distance between p and L by d²(p, L) := min_{q∈L} ∥p − q∥₂² = ∥p − πL(p)∥₂².

Definition 1. The linear (or affine) r-Subspace k-Clustering problem on P ∈ R^{n×d} is
min_L d²(P, L) := Σ_{i=1}^n min_{L∈L} d²(pi, L)   (1)
where P is an n × d matrix whose rows are p1, . . . , pn, and L = {Lj}_{j=1}^k is a set of k centers, each of which is an r-dimensional linear (or affine) subspace. PCA is a special case when k = 1 and the center is an r-dimensional subspace. This optimal r-dimensional subspace is spanned by the top r right singular vectors of P, also known as the principal components, and can be found using the singular value decomposition (SVD).
Another special case of the above is k-means clustering when the centers are points (r = 0). Constrained versions of this problem include NNMF where the r-dimensional subspace should be spanned by positive vectors, and LDA which assumes a prior distribution defining a probability for each r-dimensional subspace. We will primarily be concerned with relative-error approximation algorithms, for which we would like to output a set L′ of k centers for which d2(P, L′) ≤(1 + ϵ) minL d2(P, L). For approximate distributed PCA, the following protocol is implicit in [9]: each server i computes its top O(r/ϵ) principal components Yi of Pi and sends them to the coordinator. The coordinator stacks the O(r/ϵ) × d matrices Yi on top of each other, forming an O(sr/ϵ) × d matrix Y, and computes the top r principal components of Y, and returns these to the servers. This provides a relative-error approximation to the PCA problem. We refer to this algorithm as Algorithm disPCA. Our Contributions. Our results are summarized as follows. Improved Communication: We improve the communication cost for using distributed PCA for kmeans clustering and similar ℓ2-fitting problems. The best previous approach is to use Corollary 4.5 in [9], which shows that given a data matrix P, if we project the rows onto the space spanned by the top O(k/ϵ2) principal components, and solve the k-means problem in this subspace, we obtain a (1+ϵ)-approximation. In the distributed setting, this would require first running Algorithm disPCA with parameter r = O(k/ϵ2), and thus communication at least O(skd/ϵ3) to compute the O(k/ϵ2) global principal components. Then one can solve a distributed k-means problem in this subspace, and an α-approximation in it translates to an overall α(1 + ϵ) approximation. Our Theorem 3 shows that it suffices to run Algorithm disPCA while only incurring O(skd/ϵ2) communication to compute the O(k/ϵ2) global principal components, preserving the k-means solution cost up to a (1 + ϵ)-factor. 
Our communication is thus a 1/ϵ factor better, and illustrates that for downstream applications it is sometimes important to "open up the box" rather than to directly use the guarantees of a generic PCA algorithm (which would give O(skd/ϵ³) communication). One feature of this approach is that by using the distributed k-means algorithm in [2] on the projected data, the coordinator can sample points from the servers proportionally to their local k-means solution costs; this reduces the communication roughly by a factor of s, which would otherwise come from each server sending its local k-means coreset to the coordinator. Furthermore, before applying the above approach, one can first run any other dimension reduction to dimension d′ so that the k-means cost is preserved up to a certain accuracy. For example, if we want a 1 + ϵ approximation factor, we can set d′ = O(log n/ϵ²) by a Johnson-Lindenstrauss transform; if we want a larger 2 + ϵ approximation factor, we can set d′ = O(k/ϵ²) using [4]. In this way the parameter d in the above communication cost bound can be replaced by d′. Note that unlike these dimension reductions, our algorithm for projecting onto principal components is deterministic and does not incur an error probability.

Improved Computation: We turn to the computational cost of Algorithm disPCA, which to the best of our knowledge has not been addressed. A major bottleneck is that each player is computing a singular value decomposition (SVD) of its point set Pi, which takes min(nid², ni²d) time. We change Algorithm disPCA to instead have each server first sample an oblivious subspace embedding (OSE) [22, 5, 19, 17] matrix Hi, and run the algorithm on the point set defined by the rows of HiPi. Using known OSEs, one can choose Hi to have only a single non-zero entry per column, so that HiPi can be computed in nnz(Pi) time. Moreover, the number of rows of Hi is O(d²/ϵ²), which may be significantly less than the original number of rows ni.
This number of rows can be further reduced to O(d log^O(1) d/ϵ²) if one is willing to spend O(nnz(Pi) log^O(1) d/ϵ) time [19]. We note that the number of non-zero entries of HiPi is no more than that of Pi. One technical issue is that each of the s servers is locally performing a subspace embedding, which succeeds with only constant probability. If we want a single non-zero entry per column of Hi, to achieve success probability 1 − O(1/s) so that we can union bound over all s servers succeeding, we would naively need to increase the number of rows of Hi by a factor linear in s. We give a general technique which takes a subspace embedding that succeeds with constant probability as a black box, and show how to perform a procedure which applies it O(log 1/δ) times independently and from these applications finds one which is guaranteed to succeed with probability 1 − δ. Thus, in this setting the players can compute a subspace embedding of their data in nnz(Pi) time, for which the number of non-zero entries of HiPi is no larger than that of Pi, and without incurring this additional factor of s. This may be of independent interest. It may still be expensive to perform the SVD of HiPi and for the coordinator to perform an SVD on Y in Algorithm disPCA. We therefore replace the SVD computation with a randomized approximate SVD computation with spectral norm error. Our contribution here is to analyze the error in distributed PCA and k-means after performing these speedups.

Empirical Results: Our speedups result in significant computational savings. The randomized techniques we use reduce the time by orders of magnitude on medium and large-scale data sets, while preserving the communication cost. Although the theory predicts a small new additive error because of our speedups, in our experiments the solution quality was only negligibly affected.
Related Work A number of algorithms for approximate distributed PCA have been proposed [21, 14, 16, 9], but either without theoretical guarantees, or without considering communication. Most closely related to our work are [9, 12]. [9] observes that the top singular vectors of the local data form its summary and that the union of these summaries is a summary of the global data, i.e., Algorithm disPCA. [12] studies algorithms in the arbitrary partition model, in which each server holds a matrix Pi and P = Σ_{i=1}^s Pi. More details and more related work can be found in the appendix.

2 Tradeoff between Communication and Solution Quality

Algorithm disPCA for distributed PCA is suggested in [21, 9], and consists of a local stage and a global stage. In the local stage, each node performs SVD on its local data matrix, and communicates the first t1 singular values Σi^(t1) and the first t1 right singular vectors Vi^(t1) to the central coordinator. Then in the global stage, the coordinator concatenates the Yi = Σi^(t1)(Vi^(t1))⊤ to form a matrix Y, and performs SVD on it to get the first t2 right singular vectors.

[Figure 1: The key points of Algorithm disPCA: each node runs a local PCA on Pi and sends Yi = Σi^(t1)(Vi^(t1))⊤; the coordinator stacks these into Y and runs a global PCA to obtain V^(t2).]

To get some intuition, consider the easy case when the data points actually lie in an r-dimensional subspace. We can run Algorithm disPCA with t1 = t2 = r. Since Pi has rank r, its projection to the subspace spanned by its first t1 = r right singular vectors, P̂i = UiΣi^(r)(Vi^(r))⊤, is identical to Pi. Then we only need to do PCA on P̂, the concatenation of the P̂i. Observing that P̂ = ŨY where Ũ is orthonormal, it suffices to compute the SVD of Y, and only Σi^(r)(Vi^(r))⊤ needs to be communicated.
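The two-stage protocol just described can be sketched as follows (an illustrative numpy sketch, not the authors' code; the data sizes are arbitrary, and the data is generated to lie exactly in an r-dimensional subspace so that the intuition above applies):

```python
import numpy as np

def local_stage(P_i, t1):
    # each node: SVD of its local data; send the t1 x d summary Sigma_i^(t1) (V_i^(t1))^T
    _, S, Vt = np.linalg.svd(P_i, full_matrices=False)
    return np.diag(S[:t1]) @ Vt[:t1]

def global_stage(summaries, t2):
    # coordinator: stack the summaries into Y and take its top-t2 right singular vectors
    Y = np.vstack(summaries)
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return Vt[:t2].T                     # d x t2 matrix V^(t2)

rng = np.random.default_rng(1)
s, d, r = 4, 30, 3
# data lying exactly in an r-dimensional subspace, split across s nodes
B = rng.standard_normal((r, d))
parts = [rng.standard_normal((50, r)) @ B for _ in range(s)]
P = np.vstack(parts)

V = global_stage([local_stage(P_i, t1=r) for P_i in parts], t2=r)
resid = np.linalg.norm(P - P @ V @ V.T)  # ~0 in the exact rank-r case
print(resid)
```

In this exact low-rank case the residual is (numerically) zero, matching the argument that the row space of Y equals that of P; the general case requires the larger t1 of Theorem 2.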
In the general case, when the data may have rank higher than r, it turns out that one needs to set t1 sufficiently large, so that P̂i approximates Pi well enough and does not introduce too much error into the final solution. In particular, the following close projection property of the SVD is useful:

Lemma 1. Suppose A has SVD A = UΣV⊤ and let Â = AV^(t)(V^(t))⊤ denote its SVD truncation. If t = O(r/ϵ), then for any d × r matrix X with orthonormal columns,
0 ≤ ∥AX − ÂX∥F² ≤ ϵ d²(A, LX), and 0 ≤ ∥AX∥F² − ∥ÂX∥F² ≤ ϵ d²(A, LX).

This means that the projections of Â and A on any r-dimensional subspace are close when the projected dimension t is sufficiently large compared to r. Now, note that the difference between ∥P − PXX⊤∥F² and ∥P̂ − P̂XX⊤∥F² is only related to ∥PX∥F² − ∥P̂X∥F² = Σi [∥PiX∥F² − ∥P̂iX∥F²], each term of which is bounded by the lemma. So we can use P̂ as a proxy for P in the PCA task. Again, computing PCA on P̂ is equivalent to computing the SVD of Y, as done in Algorithm disPCA. This leads to the following theorem, which is implicit in [9], stating that the algorithm can produce a (1 + ϵ)-approximation for the distributed PCA problem.

Theorem 2. Suppose Algorithm disPCA takes parameters t1 ≥ r + ⌈4r/ϵ⌉ − 1 and t2 = r. Then
∥P − PV^(r)(V^(r))⊤∥F² ≤ (1 + ϵ) min_X ∥P − PXX⊤∥F²
where the minimization is over d × r orthonormal matrices X. The communication is O(srd/ϵ) words.

2.1 Guarantees for Distributed ℓ2-Error Fitting

Algorithm disPCA can also be used as a pre-processing step for applications such as ℓ2-error fitting. In this section, we prove the correctness of Algorithm disPCA as pre-processing for these applications. In particular, we show that by setting t1, t2 sufficiently large, the objective value of any solution changes only slightly when the original data P is replaced by the projected data ˜P = PV^(t2)(V^(t2))⊤.
Therefore, the projected data serves as a proxy of the original data, i.e., any distributed algorithm can be applied on the projected data to get a solution on the original data. As the dimension is lower, the communication cost is reduced. Formally:

Theorem 3. Let t1 = t2 = O(rk/ϵ²) in Algorithm disPCA for ϵ ∈ (0, 1/3). Then there exists a constant c0 ≥ 0 such that for any set of k centers L in r-Subspace k-Clustering,
(1 − ϵ)d²(P, L) ≤ d²(˜P, L) + c0 ≤ (1 + ϵ)d²(P, L).

The theorem implies that any α-approximate solution L on the projected data ˜P is a (1 + 3ϵ)α-approximation on the original data P. To see this, let L∗ denote the optimal solution. Then
(1 − ϵ)d²(P, L) ≤ d²(˜P, L) + c0 ≤ α d²(˜P, L∗) + c0 ≤ α(1 + ϵ)d²(P, L∗),
which leads to d²(P, L) ≤ (1 + 3ϵ)α d²(P, L∗). In other words, the distributed PCA step only introduces a small multiplicative approximation factor of (1 + 3ϵ).

Algorithm 1 Distributed k-means clustering
Input: {Pi}_{i=1}^s, k ∈ N+, ϵ ∈ (0, 1/2), and a non-distributed α-approximation algorithm Aα
1: Run Algorithm disPCA with t1 = t2 = O(k/ϵ²) to get V, and send V to all nodes.
2: Run the distributed k-means clustering algorithm in [2] on {PiVV⊤}_{i=1}^s, using Aα as a subroutine, to get k centers L.
Output: L.

The key to proving the theorem is the close projection property of the algorithm (Lemma 4): for any low dimensional subspace spanned by X, the projections of P and ˜P on the subspace are close. In particular, we choose X to be the orthonormal basis of the subspace spanning the centers. Then the difference between the objective values of P and ˜P can be decomposed into two terms depending only on ∥PX − ˜PX∥F² and ∥PX∥F² − ∥˜PX∥F² respectively, which are small as shown by the lemma. The complete proof of Theorem 3 is provided in the appendix.

Lemma 4. Let t1 = t2 = O(k/ϵ) in Algorithm disPCA. Then for any d × k matrix X with orthonormal columns,
0 ≤ ∥PX − ˜PX∥F² ≤ ϵ d²(P, LX), and 0 ≤ ∥PX∥F² − ∥˜PX∥F² ≤ ϵ d²(P, LX).
Proof Sketch: We first introduce some auxiliary variables for the analysis, which act as intermediate connections between P and ˜P. Imagine we perform two kinds of projections: first project Pi to P̂i = PiVi^(t1)(Vi^(t1))⊤, then project P̂i to P̄i = P̂iV^(t2)(V^(t2))⊤. Let P̂ denote the vertical concatenation of the P̂i and let P̄ denote the vertical concatenation of the P̄i. These variables are designed so that the difference between P and P̂ and that between P̂ and P̄ are easily bounded. Our proof then proceeds by first bounding these differences, and then bounding that between P and ˜P. In the following we sketch the proof of the second statement; the first statement can be proved by a similar argument. See the appendix for details.
∥PX∥F² − ∥˜PX∥F² = [∥PX∥F² − ∥P̂X∥F²] + [∥P̂X∥F² − ∥P̄X∥F²] + [∥P̄X∥F² − ∥˜PX∥F²].
The first term is just Σ_{i=1}^s [∥PiX∥F² − ∥P̂iX∥F²], each term of which can be bounded by Lemma 1, since P̂i is the SVD truncation of Pi. The second term can be bounded similarly. The more difficult part is the third term. Note that P̄i = P̂iZ and ˜Pi = PiZ where Z := V^(t2)(V^(t2))⊤X, leading to ∥P̄X∥F² − ∥˜PX∥F² = Σ_{i=1}^s [∥P̂iZ∥F² − ∥PiZ∥F²]. Although Z is not orthonormal as required by Lemma 1, we prove a generalization (Lemma 7 in the appendix) which can be applied to show that the third term is indeed small.

Application to k-Means Clustering To see the implication, consider the k-means clustering problem. We can first perform any other possible dimension reduction to dimension d′ so that the k-means cost is preserved up to accuracy ϵ, then run Algorithm disPCA, and finally run any distributed k-means clustering algorithm on the projected data to get a good approximate solution. For example, in the first step we can set d′ = O(log n/ϵ²) using a Johnson-Lindenstrauss transform, or we can perform no reduction and simply use the original data.
As a concrete example, we can use the original data (d′ = d), then run Algorithm disPCA, and finally run the distributed clustering algorithm in [2], which uses any non-distributed α-approximation algorithm as a subroutine and computes a (1 + ϵ)α-approximate solution. The resulting algorithm is presented in Algorithm 1.

Theorem 5. With probability at least 1 − δ, Algorithm 1 outputs a (1 + ϵ)²α-approximate solution for distributed k-means clustering. The total communication cost of Algorithm 1 is O(sk/ϵ²) vectors in R^d plus O((1/ϵ⁴)(k²/ϵ² + log(1/δ)) + sk log(sk/δ)) vectors in R^{O(k/ϵ²)}.

3 Fast Distributed PCA

Subspace Embeddings One can significantly improve the running time of the distributed PCA algorithms by using subspace embeddings, while keeping guarantees similar to those in Lemma 4, which suffice for ℓ2-error fitting. More precisely, a subspace embedding matrix H ∈ R^{ℓ×n} for a matrix A ∈ R^{n×d} has the property that for all vectors y ∈ R^d, ∥HAy∥₂ = (1 ± ϵ)∥Ay∥₂. Suppose each node vi independently chooses a random subspace embedding matrix Hi for its local data Pi. Then, they run Algorithm disPCA on the embedded data {HiPi}_{i=1}^s instead of on the original data {Pi}_{i=1}^s.

The work of [22] pioneered subspace embeddings. The recent fast sparse subspace embeddings [5] and their optimizations [17, 19] are particularly suitable for large scale sparse data sets, since their running time is linear in the number of non-zero entries in the data matrix, and they also preserve the sparsity of the data. The algorithm takes as input an n × d matrix A and a parameter ℓ, and outputs an ℓ × d embedded matrix A′ = HA (the embedding matrix H does not need to be built explicitly). The embedded matrix is constructed as follows: initialize A′ = 0; for each row of A, multiply it by +1 or −1 with equal probability, then add it to a row of A′ chosen uniformly at random. The success probability is constant, while we need it to be 1 − δ where δ = Θ(1/s).
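The sparse embedding just described can be sketched as follows (an illustrative dense-numpy implementation, not the authors' code; the sizes and the sketch dimension ℓ are arbitrary test values):

```python
import numpy as np

def sparse_embed(A, ell, rng):
    """CountSketch-style subspace embedding: each row of A is multiplied by a
    random sign and added to one of ell uniformly chosen rows of A'.
    H has a single non-zero per column, so HA costs nnz(A) time when A is sparse."""
    n = A.shape[0]
    signs = rng.choice([-1.0, 1.0], size=n)
    buckets = rng.integers(0, ell, size=n)
    Aprime = np.zeros((ell, A.shape[1]))
    np.add.at(Aprime, buckets, signs[:, None] * A)  # unbuffered scatter-add
    return Aprime

rng = np.random.default_rng(2)
n, d = 5000, 10
A = rng.standard_normal((n, d))
Aprime = sparse_embed(A, ell=2000, rng=rng)  # ell = O(d^2 / eps^2) in the theory

# with constant probability, ||A'y|| = (1 +- eps) ||Ay|| for all y
y = rng.standard_normal(d)
print(np.linalg.norm(Aprime @ y) / np.linalg.norm(A @ y))
```

Note the guarantee holds simultaneously for all y in the column space, which is what makes the sketch usable as a drop-in replacement for Pi in Algorithm disPCA.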
Known results which preserve the number of non-zero entries per column of H at 1 increase the dimension of H by a factor of s. To avoid this, we propose an approach to boost the success probability by computing O(log(1/δ)) independent embeddings, each with only constant success probability, and then running a cross-validation style procedure to find one which succeeds with probability 1 − δ. More precisely, we compute the SVD of all embedded matrices HjA = UjΣjVj⊤, and find a j ∈ [r] such that for at least half of the indices j′ ≠ j, all singular values of ΣjVj⊤Vj′Σj′⊤ are in [1 ± O(ϵ)] (see Algorithm 4 in the appendix). The reason why such an embedding HjA succeeds with high probability is as follows. Any two successful embeddings HjA and Hj′A, by definition, satisfy ∥HjAx∥₂² = (1 ± O(ϵ))∥Hj′Ax∥₂² for all x, which we show is equivalent to passing the test on the singular values. Since with probability at least 1 − δ, a 9/10 fraction of the embeddings are successful, it follows that the one we choose is successful with probability 1 − δ.

Randomized SVD The exact SVD of an n × d matrix is impractical when n or d is large. Here we show that the randomized SVD algorithm from [11] can be applied to speed up the computation without compromising the quality of the solution much. We need to use their specific form of randomized SVD since the error is with respect to the spectral norm, rather than the Frobenius norm, and so can be much smaller, as needed by our applications. The algorithm first probes the row space of the ℓ × d input matrix A with an ℓ × 2t random matrix Ω and orthogonalizes the image of Ω to get a basis Q (i.e., QR-factorize A⊤Ω); it then projects the data onto this basis and computes the SVD factorization of the smaller matrix AQ. It also performs q power iterations to push the basis towards the top t singular vectors.
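The randomized SVD step can be sketched as follows (a minimal sketch in the spirit of [11], not the authors' code; the test matrix, its spectrum, and the rank t are arbitrary choices):

```python
import numpy as np

def randomized_svd(A, t, q=2, rng=None):
    """Approximate top-t SVD of A (ell x d), with oversampling to 2t columns
    and q power iterations, giving a spectral-norm error close to sigma_{t+1}."""
    rng = rng or np.random.default_rng()
    ell, d = A.shape
    Omega = rng.standard_normal((ell, 2 * t))  # probe the row space of A
    Q, _ = np.linalg.qr(A.T @ Omega)           # d x 2t orthonormal basis
    for _ in range(q):                         # power iterations sharpen the basis
        Q, _ = np.linalg.qr(A.T @ (A @ Q))
    U, S, Wt = np.linalg.svd(A @ Q, full_matrices=False)  # SVD of the small matrix
    V = Q @ Wt.T                               # rotate back to right singular vectors of A
    return U[:, :t], S[:t], V[:, :t]

rng = np.random.default_rng(3)
# a 200 x 100 matrix with geometrically decaying singular values 1, 1/2, 1/4, ...
G = rng.standard_normal((200, 100))
U0, _, V0t = np.linalg.svd(G, full_matrices=False)
A = U0 @ np.diag(0.5 ** np.arange(100)) @ V0t

U, S, V = randomized_svd(A, t=5, rng=rng)
err = np.linalg.norm(A - (U * S) @ V.T, 2)  # spectral error, compare with sigma_6 = 0.5**5
print(err, 0.5 ** 5)
```

The spectral-norm error is what the analysis of Theorem 6 needs: it cannot drop below σ_{t+1}, but with oversampling and power iterations it stays very close to that floor.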
Fast Distributed PCA for ℓ2-Error Fitting We modify Algorithm disPCA by first having each node do a subspace embedding locally, and then replacing each SVD invocation with a randomized SVD invocation. We thus arrive at Algorithm 2. For ℓ2-error fitting problems, by combining the approximation guarantees of the randomized techniques with that of distributed PCA, we are able to prove:

Theorem 6. Suppose Algorithm 2 takes ϵ ∈ (0, 1/2], t1 = t2 = O(max{k/ϵ², log(s/δ)}), ℓ = O(d²/ϵ²), and q = O(max{log(d/ϵ), log(sk/ϵ)}) as input, and sets the failure probability of each local subspace embedding to δ′ = δ/2s. Let ˜P = PVV⊤. Then with probability at least 1 − δ, there exists a constant c0 ≥ 0, such that for any set of k points L,
(1 − ϵ)d²(P, L) − ϵ∥PX∥F² ≤ d²(˜P, L) + c0 ≤ (1 + ϵ)d²(P, L) + ϵ∥PX∥F²
where X is an orthonormal matrix whose columns span L. The total communication is O(skd/ϵ²) and the total time is O(nnz(P) + s[d³k/ϵ⁴ + k²d²/ϵ⁶] log(d/ϵ) log(sk/(δϵ))).

Proof Sketch: It suffices to show that ˜P enjoys the close projection property as in Lemma 4, i.e., ∥PX − ˜PX∥F² ≈ 0 and ∥PX∥F² − ∥˜PX∥F² ≈ 0 for any orthonormal matrix X whose columns span a low dimensional subspace. Note that Algorithm 2 is just running Algorithm disPCA (with randomized SVD) on TP where T = diag(H1, H2, . . . , Hs), so we first show that T˜P enjoys this property. But now the exact SVD is replaced with a randomized SVD, for which we need to use the spectral error bound to argue that the error introduced is small. More precisely, for a matrix A and its SVD truncation Â computed by randomized SVD, it is guaranteed that the spectral norm of A − Â is small; then ∥(A − Â)X∥F is small for any X with small Frobenius norm, in particular for the orthonormal basis spanning a low dimensional subspace. This then suffices to guarantee that T˜P enjoys the close projection property. Given this, it suffices to show that ˜P enjoys this property as T˜P does, which follows from the definition of a subspace embedding.
Algorithm 2 Fast Distributed PCA for ℓ2-Error Fitting
Input: {Pi}_{i=1}^s; parameters t1, t2 for Algorithm disPCA; ℓ, q for the randomized techniques.
1: for each node vi ∈ V do
2: Compute the subspace embedding P′i = HiPi.
3: end for
4: Run Algorithm disPCA on {P′i}_{i=1}^s to get V, where the SVD is randomized.
Output: V.

4 Experiments

Our focus is to show that the randomized techniques used in Algorithm 2 reduce the time taken significantly without compromising the quality of the solution. We perform experiments for three tasks: rank-r approximation, k-means clustering, and principal component regression (PCR).

Datasets We choose the following real-world datasets from the UCI repository [1] for our experiments. For low rank approximation and k-means clustering, we choose two medium size datasets, NewsGroups (18774 × 61188) and MNIST (70000 × 784), and two large-scale Bag-of-Words datasets: NYTimes news articles (BOWnytimes) (300000 × 102660) and PubMed abstracts (BOWpubmed) (8200000 × 141043). We use r = 10 for rank-r approximation and k = 10 for k-means clustering. For PCR, we use MNIST and further choose YearPredictionMSD (515345 × 90), CTslices (53500 × 386), and a large dataset MNIST8m (800000 × 784).

Experimental Methodology The algorithms are evaluated on a star network. The number of nodes is s = 25 for the medium-size datasets, and s = 100 for the larger ones. We distribute the data over the nodes using a weighted partition, where each point is assigned to a node with probability proportional to the node's weight, chosen from the power law with parameter α = 2. For each projection dimension, we first construct the projected data using distributed PCA. For low rank approximation, we report the ratio between the cost of the obtained solution and that of the solution computed by SVD on the global data. For k-means, we run the algorithm in [2] (with Lloyd's method as a subroutine) on the projected data to get a solution.
Then we report the ratio between the cost of this solution and that of a solution obtained by running Lloyd's method directly on the global data. For PCR, we perform regression on the projected data to get a solution, and report the ratio between its error and that of a solution obtained by PCR directly on the global data. We stop an algorithm if it takes more than 24 hours. For each projection dimension and each algorithm with randomness, the average ratio over 5 runs is reported.

Results. Figure 2 shows the results for low rank approximation. We observe that the error of the fast distributed PCA is comparable to that of the exact solution computed directly on the global data. The same holds for distributed PCA with only one (or neither) of subspace embedding and randomized SVD. Furthermore, the error of the fast PCA is comparable to that of the normal distributed PCA, which means that the speedup techniques barely affect the accuracy of the solution. The second row shows the computational time, which exhibits a significant decrease for the fast distributed PCA. For example, on NewsGroups, the time of the fast distributed PCA improves over that of the normal distributed PCA by a factor of 10 to 100. On the large dataset BOWpubmed, the normal PCA takes too long to finish and no results are presented, while the sped-up versions produce good results in reasonable time. The randomized techniques thus give a substantial performance improvement while keeping the solution quality almost the same. Figures 3 and 4 show the results for k-means clustering and PCR, respectively. As with low rank approximation, the distributed solutions are almost as good as those computed directly on the global data, the speedup barely affects the solution quality, and we again observe a large decrease in running time.
Acknowledgments

This work was supported in part by NSF grants CCF-0953192, CCF-1451177, CCF-1101283, and CCF-1422910, ONR grant N00014-09-1-0751, and AFOSR grant FA9550-09-1-0538. David Woodruff would like to acknowledge the XDATA program of the Defense Advanced Research Projects Agency (DARPA), administered through Air Force Research Laboratory contract FA8750-12-C-0323, for supporting this work.

[Figure 2: Low rank approximation. Panels (a)-(d): NewsGroups, MNIST, BOWnytimes, BOWpubmed. First row: error (normalized by baseline) vs. projection dimension; second row: time vs. projection dimension. Curves: Fast_PCA, Only_Subspace, Only_Randomized, Normal_PCA.]

[Figure 3: k-means clustering. Panels (a)-(d): NewsGroups, MNIST, BOWnytimes, BOWpubmed. First row: cost (normalized by baseline) vs. projection dimension; second row: time vs. projection dimension. Curves: Fast_PCA, Only_Subspace, Only_Randomized, Normal_PCA.]
[Figure 4: PCR. Panels (a)-(d): MNIST, YearPredictionMSD, CTslices, MNIST8m. First row: error (normalized by baseline) vs. projection dimension; second row: time vs. projection dimension. Curves: Fast_PCA, Only_Subspace, Only_Randomized, Normal_PCA.]

References

[1] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[2] M.-F. Balcan, S. Ehrlich, and Y. Liang. Distributed k-means and k-median clustering on general communication topologies. In Advances in Neural Information Processing Systems, 2013.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 2003.
[4] C. Boutsidis, A. Zouzias, M. W. Mahoney, and P. Drineas. Stochastic dimensionality reduction for k-means clustering. CoRR, abs/1110.2897, 2011.
[5] K. L. Clarkson and D. P. Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing, 2013.
[6] M. Cohen, S. Elder, C. Musco, C. Musco, and M. Persu. Dimensionality reduction for k-means clustering and low rank approximation. arXiv preprint arXiv:1410.6801, 2014.
[7] J. C. Corbett, J. Dean, M. Epstein, A. Fikes, C. Frost, J. Furman, S. Ghemawat, A. Gubarev, C. Heiser, P. Hochschild, et al. Spanner: Google's globally-distributed database. In Proceedings of the USENIX Symposium on Operating Systems Design and Implementation, 2012.
[8] D. Feldman and M. Langberg. A unified framework for approximating and clustering data.
In Proceedings of the Annual ACM Symposium on Theory of Computing, 2011.
[9] D. Feldman, M. Schmidt, and C. Sohler. Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering. In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms, 2013.
[10] M. Ghashami and J. M. Phillips. Relative errors for deterministic low-rank matrix approximations. In ACM-SIAM Symposium on Discrete Algorithms, 2014.
[11] N. Halko, P.-G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 2011.
[12] R. Kannan, S. S. Vempala, and D. P. Woodruff. Principal component analysis and higher correlations for distributed data. In Proceedings of the Conference on Learning Theory, 2014.
[13] N. Karampatziakis and P. Mineiro. Combining structured and unstructured randomness in large scale PCA. CoRR, abs/1310.6304, 2013.
[14] Y.-A. Le Borgne, S. Raybaud, and G. Bontempi. Distributed principal component analysis for wireless sensor networks. Sensors, 2008.
[15] D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. Advances in Neural Information Processing Systems, 2001.
[16] S. V. Macua, P. Belanovic, and S. Zazo. Consensus-based distributed principal component analysis in wireless sensor networks. In Proceedings of the IEEE International Workshop on Signal Processing Advances in Wireless Communications, 2010.
[17] X. Meng and M. W. Mahoney. Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression. In Proceedings of the Annual ACM Symposium on Theory of Computing, 2013.
[18] S. Mitra, M. Agrawal, A. Yadav, N. Carlsson, D. Eager, and A. Mahanti. Characterizing web-based video sharing workloads. ACM Transactions on the Web, 2011.
[19] J. Nelson and H. L. Nguyen. OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings.
In IEEE Annual Symposium on Foundations of Computer Science, 2013.
[20] C. Olston, J. Jiang, and J. Widom. Adaptive filters for continuous queries over distributed data streams. In Proceedings of the ACM SIGMOD International Conference on Management of Data, 2003.
[21] Y. Qu, G. Ostrouchov, N. Samatova, and A. Geist. Principal component analysis for dimension reduction in massive distributed data sets. In Proceedings of the IEEE International Conference on Data Mining, 2002.
[22] T. Sarlós. Improved approximation algorithms for large matrices via random projections. In IEEE Symposium on Foundations of Computer Science, 2006.
Reputation-based Worker Filtering in Crowdsourcing

Srikanth Jagabathula1, Lakshminarayanan Subramanian2,3, Ashwin Venkataraman2,3
1Department of IOMS, NYU Stern School of Business
2Department of Computer Science, New York University
3CTED, New York University Abu Dhabi
sjagabat@stern.nyu.edu, {lakshmi,ashwin}@cs.nyu.edu

Abstract

In this paper, we study the problem of aggregating noisy labels from crowd workers to infer the underlying true labels of binary tasks. Unlike most prior work, which has examined this problem under the random worker paradigm, we consider a much broader class of adversarial workers with no specific assumptions on their labeling strategy. Our key contribution is the design of a computationally efficient reputation algorithm to identify and filter out these adversarial workers in crowdsourcing systems. Our algorithm uses the concept of optimal semi-matchings in conjunction with worker penalties based on label disagreements to assign a reputation score to every worker. We provide strong theoretical guarantees for deterministic adversarial strategies as well as for the extreme case of sophisticated adversaries, where we analyze the worst-case behavior of our algorithm. Finally, we show that our reputation algorithm can significantly improve the accuracy of existing label aggregation algorithms on real-world crowdsourcing datasets.

1 Introduction

The growing popularity of online crowdsourcing services (e.g., Amazon Mechanical Turk, CrowdFlower) has made it easy to collect low-cost labels from the crowd to generate training datasets for machine learning applications. However, these applications remain vulnerable to noisy labels introduced either unintentionally by unreliable workers or intentionally by spammers and malicious workers [10, 11].
Recovering the underlying true labels in the face of noisy input in online crowdsourcing environments is challenging for three key reasons: (a) workers are often anonymous and transient and can provide random or even malicious labels; (b) the reliabilities or reputations of the workers are often unknown; and (c) each task may receive labels from only a (small) subset of the workers. Several existing approaches aim to address these challenges under the following standard setup. There is a set T of binary tasks, each with a true label in {−1, 1}. A set of workers W are asked to label the tasks, and the assignment of tasks to workers can be represented by a bipartite graph with the workers on one side, the tasks on the other, and an edge connecting each worker to the tasks she is assigned. We term this the worker-task assignment graph. Workers are assumed to generate labels according to a probabilistic model: given a task t, a worker w provides the true label with probability pw. Every worker is assumed to label each task independently of the other tasks. The goal is then to infer the underlying true labels of the tasks by aggregating the labels provided by the workers. Prior work based on the above model can be broadly classified into two categories: machine-learning based and linear-algebra based. The machine-learning approaches are typically variants of the EM algorithm [3, 16, 24, 14]. These algorithms perform well in most scenarios, but they lack theoretical guarantees. More recently, linear-algebra based algorithms [9, 6, 2] have been proposed that provide guarantees on the error in estimating the true labels of the tasks (under appropriate assumptions) and have also been shown to perform well on various real-world datasets.
While existing work focuses on workers making random errors, recent work and anecdotal evidence show that worker labeling strategies common in practice do not fit the standard random model [19]. Specific examples include vote pollution attacks on Digg [18], malicious behavior in social media [22, 12], and low-precision worker populations in crowdsourcing experiments [4]. In this paper, we aim to go beyond the standard random model and study the problem of inferring the true labels of tasks under a much broader class of adversarial worker strategies, with no specific assumptions on their labeling pattern. For instance, deterministic labeling, where a worker always gives the same label, cannot be captured by the standard random model. Malicious workers can also employ arbitrary labeling patterns to degrade the accuracy of the inferred labels. Our goal is to accurately infer the true labels of the tasks without restricting workers' strategies.

Main results. Our main contribution is the design of a reputation algorithm to identify and filter out adversarial workers in online crowdsourcing systems. Specifically, we propose two computationally efficient algorithms to compute worker reputations using only the labels provided by the workers (see Algorithms 1 and 2), which are robust to manipulation by adversaries. We compute worker reputations by assigning penalties to a worker for each task she is assigned. The assigned penalty is higher for tasks on which there is "a lot" of disagreement with the other workers. The penalties are then aggregated in a "load-balanced" manner using the concept of optimal semi-matchings [7]. The reputation algorithm is designed to be used in conjunction with any of the existing label aggregation algorithms designed for the standard random worker model: workers with low reputations are filtered out, and the aggregation algorithm is run on the remaining labels.
As a result, our algorithm can be used to boost the performance of existing label aggregation algorithms. We demonstrate its effectiveness through a combination of strong theoretical guarantees and empirical results on real-world datasets. Our analysis considers three scenarios. First, we consider the standard setting in which workers are not adversarial and provide labels according to the random model. In this setting, we show that when the worker-task assignment graph is (l, r)-regular, the reputation scores are consistent with the ordering of worker reliabilities (see Theorem 1), so that only unreliable workers are filtered out. The analysis becomes significantly more complicated for more general graphs (a fact observed in prior work; see [2]); for those, we demonstrate improvements using simulations and experiments on real-world datasets. Second, we evaluate the performance of our algorithm in the presence of workers who use deterministic labeling strategies (always label 1 or −1). For these strategies, when the worker-task assignment graph is (l, r)-regular, we show (see Theorem 2) that the adversarial workers receive lower reputations than their "honest" counterparts, provided the honest workers have "high enough" reliabilities; the exact bound depends on the prevalence of tasks with true label 1, the fraction of adversarial workers, and the average reliability of the honest workers. Third, we consider sophisticated adversaries, i.e., worst-case adversarial workers whose goal is to maximize the number of tasks they affect (i.e., cause to be labeled incorrectly). Under this assumption, we provide bounds on the "damage" they can do: we prove that, irrespective of the label aggregation algorithm (as long as it is agnostic to worker/task identities), there is a nontrivial minimum fraction of tasks whose true label is incorrectly inferred.
This bound depends on the graph structure of the honest workers' labeling pattern (see Theorem 3 for details). Our result is valid across different labeling patterns and a large class of label aggregation algorithms, and hence provides fundamental limits on the damage achievable by adversaries. Further, we propose a label aggregation algorithm utilizing the worker reputations computed in Algorithm 2 and prove an upper bound on the worst-case accuracy in inferring the true labels (see Theorem 4). Finally, using several publicly available crowdsourcing datasets (see Section 4), we show that our reputation algorithm (a) can enhance the accuracy of state-of-the-art label aggregation algorithms and (b) is able to detect workers in these datasets who exhibit certain 'non-random' strategies.

Additional Related Work. In addition to the references cited above, there are works that use gold standard tasks, i.e., tasks whose true label is already known [17, 5, 11], to correct for worker bias. [8] proposed a way of quantifying worker quality by transforming the observed labels into soft posterior labels based on the estimated confusion matrix [3]. The authors in [13] propose an empirical Bayesian algorithm to eliminate workers who label randomly without looking at the particular task (called spammers) and estimate the consensus labels from the remaining workers. Both these works use the estimated parameters to define "good workers", whereas we compute the reputation scores using only the labels provided by the workers. The authors in [20] focus on detecting specific kinds of spammers and then replace their labels with labels from new workers. We consider all types of adversarial workers, not just spammers, and do not assume access to a pool of additional workers who can be asked to label the tasks.

(As will become evident later, reputations are measures of how adversarial a worker is and are distinct from the reliabilities of workers.)
2 Model and reputation algorithms

Notation. Consider a set of tasks T having true labels in {1, −1}. Let yj denote the true label of a task tj ∈ T, and suppose that the tasks are sampled from a population in which the prevalence of positive tasks is γ ∈ [0, 1], so that a fraction γ of the tasks have true label 1. A set of workers W provide binary labels to the tasks in T. We let G denote the bipartite worker-task assignment graph, where an edge between worker wi and task tj indicates that wi has labeled tj. Further, let wi(tj) denote the label provided by worker wi to task tj, where we set wi(tj) = 0 if worker wi did not label task tj. For a task tj, let Wj ⊂ W denote the set of workers who labeled tj; likewise, for a worker wi, let Ti denote the set of tasks the worker has labeled. Denote by d_j^+ (resp. d_j^-) the number of workers labeling task tj as 1 (resp. −1). Finally, let L ∈ {1, 0, −1}^(|W|×|T|) denote the matrix representing the labels assigned by the workers to the tasks, i.e., Lij = wi(tj). Given L, the goal is to infer the true labels yj of the tasks.

Worker model. We consider the setting in which workers may be honest or adversarial; that is, W = H ∪ A with H ∩ A = ∅. Honest workers are assumed to provide labels according to a probabilistic model: for task tj with true label yj, worker wi provides label yj with probability pi and −yj with probability 1 − pi. Note that the parameter pi does not depend on the particular task being labeled, so an honest worker labels each task independently. It is standard to define the reliability of an honest worker as µi = 2pi − 1, so that µi ∈ [−1, 1]. Further, we assume that the honest workers are sampled from a population with average reliability µ > 0. Adversaries, on the other hand, may adopt any arbitrary (deterministic or randomized) labeling strategy that cannot be described by the above probabilistic process.
For instance, the adversary could always label all tasks as 1, irrespective of their true labels. Another example is an adversary who decides her labels based on the existing labels cast by other workers (assuming she has access to such information). Note, however, that adversarial workers need not always provide incorrect labels. Essentially, the presence of such workers breaks the assumptions of the model and can adversely impact the performance of aggregation algorithms. Hence, our focus in this paper is on designing algorithms to identify and filter out such adversarial workers. Once this is achieved, we can use existing state-of-the-art label aggregation algorithms on the remaining labels to infer the true labels of the tasks. To identify adversarial workers, we propose an algorithm for computing "reputation" or "trust" scores for each worker. More concretely, we assign penalties (in a suitable way) to every worker: the higher the penalty, the worse the worker's reputation. First, note that any task on which all labels are 1 (or all are −1) provides no information about the reliabilities of the workers who labeled it. Hence, we focus on the tasks that receive both 1 and −1 labels; we call this set the conflict set Tcs. Further, since we have no "side" information on the identities of workers, any reputation score must be computed solely from the labels provided by the workers. We start with the following basic idea: a worker is penalized for every "conflict" she is involved in, i.e., for every task in the conflict set that she has labeled. This idea is motivated by the fact that in an ideal scenario, where all honest workers have reliability µi = 1, a conflict indicates the presence of an adversary. The reputation score aims to capture the number of conflicts each worker is involved in: the more conflicts, the worse the reputation score.
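For concreteness, the conflict set Tcs and the per-task counts d_j^+ and d_j^- can be read off a toy label matrix (the matrix below is illustrative only, not taken from the paper):

```python
import numpy as np

# Rows = workers, columns = tasks; 0 means the worker did not label the task.
L = np.array([[ 1,  1, -1,  0],
              [ 1, -1,  0,  1],
              [-1,  1, -1,  1]])

d_plus = (L == 1).sum(axis=0)    # number of +1 labels per task (d_j^+)
d_minus = (L == -1).sum(axis=0)  # number of -1 labels per task (d_j^-)
# Conflict set T_cs: tasks receiving both a +1 and a -1 label.
T_cs = np.flatnonzero((d_plus > 0) & (d_minus > 0))
```

Here tasks 0 and 1 are in the conflict set, while tasks 2 and 3 are labeled unanimously and hence carry no information about worker reliability.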
However, a straightforward aggregation of penalties may over-penalize (honest) workers who label many tasks. To overcome this issue, we propose two techniques: (a) soft and (b) hard assignment of penalties. In the soft assignment of penalties (Algorithm 1), we assign a penalty of 1/d_j^+ to every worker who labels task tj as 1 and 1/d_j^- to every worker who labels it as −1. Then, for each worker, we aggregate the penalties across all assigned tasks by taking the average. This assignment implicitly rewards agreement by making the penalty inversely proportional to the number of other workers that agree with a worker, and taking the average normalizes for the number of tasks the worker labeled. Since we expect honest workers to agree with the majority more often than not, we expect this technique to assign lower penalties to honest workers than to adversaries. The soft assignment of penalties can be shown to perform quite well in identifying low-reliability and adversarial workers (see Theorems 1 and 2). However, it may still be manipulated by more "sophisticated" adversaries who can adapt their labeling strategy to target certain tasks and inflate the penalties of specific honest workers. In fact, for such worst-case adversaries we can show (Theorem 3) that, given any honest worker labeling pattern, there is a lower bound on the number of tasks whose true label cannot be inferred correctly by any label aggregation algorithm. To address such sophisticated adversaries, we propose a hard penalty assignment scheme (Algorithm 2), whose key idea is not to distribute the penalty evenly across all workers, but to penalize only two workers per conflict task: one "representative" worker among those who labeled 1 and another among those who labeled −1.
While choosing these representative workers, the goal is to pick them in a load-balanced manner so as to "spread" the penalty across all workers rather than concentrate it on one or a few. The final penalty of each worker is the sum of the penalties accrued across all the (conflict) tasks assigned to the worker. Intuitively, such hard assignment of penalties will penalize workers with high degrees and many conflicts (who are potential worst-case adversaries), limiting their impact. We use the concept of optimal semi-matchings [7] on bipartite graphs to distribute penalties in a load-balanced manner, which we briefly review here. For a bipartite graph B = (U, V, E), a semi-matching in B is a set of edges M ⊆ E such that each vertex in V is incident to exactly one edge in M (vertices in U may be incident to multiple edges in M). A semi-matching generalizes the notion of a matching on a bipartite graph. To define an optimal semi-matching, we introduce a cost function: for each u ∈ U, let deg_M(u) denote the number of edges in M incident to u, and define cost_M(u) = Σ_{i=1}^{deg_M(u)} i = deg_M(u)(deg_M(u) + 1)/2. An optimal semi-matching is one that minimizes Σ_{u∈U} cost_M(u). This notion of cost is motivated by the load-balancing problem of scheduling tasks on machines (see [7] for further details). Intuitively, an optimal semi-matching fairly matches the V-vertices across the U-vertices so that the maximum "load" on any U-vertex is minimized.
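As a rough illustration of the load-balancing objective, the following greedy pass assigns each V-vertex to its currently least-loaded U-neighbor. This is only a heuristic sketch with names of our own choosing, not the exact optimal semi-matching algorithm of [7]:

```python
def greedy_semi_matching(adj):
    """adj: dict mapping each v in V to its list of neighbors in U.

    Returns (match, deg), where match[v] is the U-vertex v is assigned to
    and deg[u] = deg_M(u). Each v is matched to exactly one neighbor (a
    semi-matching); greedily picking the least-loaded neighbor approximates
    the objective sum_u deg_M(u)(deg_M(u)+1)/2.
    """
    deg = {}
    match = {}
    for v, nbrs in adj.items():
        u = min(nbrs, key=lambda u: deg.get(u, 0))  # least-loaded neighbor
        match[v] = u
        deg[u] = deg.get(u, 0) + 1
    return match, deg

def semi_matching_cost(deg):
    # cost_M(u) = sum_{i=1}^{deg_M(u)} i = deg_M(u)(deg_M(u)+1)/2
    return sum(d * (d + 1) // 2 for d in deg.values())
```

For instance, with four tasks adjacent to three workers, the greedy pass spreads the four assignments as loads (2, 1, 1) rather than piling them on one worker.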
Algorithm 1 SOFT PENALTY
1: Input: W, T and L
2: For every task tj ∈ Tcs, assign penalty sij to each worker wi ∈ Wj as follows:
     sij = 1/d_j^+ if Lij = 1
     sij = 1/d_j^- if Lij = −1
3: Output: Penalty of worker wi: pen(wi) = (Σ_{tj ∈ Ti ∩ Tcs} sij) / |Ti ∩ Tcs|

Algorithm 2 HARD PENALTY
1: Input: W, T and L
2: Create a bipartite graph Bcs as follows: (i) each worker wi ∈ W is represented by a node on the left; (ii) each task tj ∈ Tcs is represented by two nodes on the right, t_j^+ and t_j^-; (iii) add the edge (wi, t_j^+) if Lij = 1, or the edge (wi, t_j^-) if Lij = −1.
3: Compute an optimal semi-matching OSM on Bcs and let di ← degree of worker wi in OSM
4: Output: Penalty of worker wi: pen(wi) = di

3 Theoretical Results

Soft penalty. We focus on (l, r)-regular worker-task assignment graphs, in which every worker is assigned l tasks and every task is labeled by r workers. The performance of our reputation algorithms depends on the reliabilities of the workers as well as the true labels of the tasks. Hence, we consider the following probabilistic model: for a given (l, r)-regular worker-task assignment graph G, the reliabilities of the workers and the true labels of the tasks are sampled independently (from the distributions described in Section 2). We then analyze the performance of our algorithms as the task degree r (and hence the number of workers |W|) goes to infinity. Specifically, we establish the following results (the proofs of all theorems are in the supplementary material).

Theorem 1. Suppose there are no adversarial workers, i.e., A = ∅, and that the worker-task assignment graph G is (l, r)-regular. Then, with high probability as r → ∞, for any pair of workers wi and wi′, µi < µi′ ⟹ pen(wi) > pen(wi′), i.e., higher-reliability workers are assigned lower penalties by Algorithm 1.

The probability in the above theorem is according to the model described above.
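The soft penalty of Algorithm 1 translates directly into a few lines of NumPy; the sketch below is our own, not the authors' code:

```python
import numpy as np

def soft_penalty(L):
    """Algorithm 1 (SOFT PENALTY). L: |W| x |T| label matrix over {1, 0, -1}."""
    d_plus = (L == 1).sum(axis=0).astype(float)
    d_minus = (L == -1).sum(axis=0).astype(float)
    conflict = (d_plus > 0) & (d_minus > 0)   # conflict set T_cs
    pens = []
    for row in L:
        mask = conflict & (row != 0)          # tasks in T_i ∩ T_cs
        if not mask.any():
            pens.append(0.0)
            continue
        # s_ij = 1/d_j^+ if the worker said 1, else 1/d_j^-; then average.
        s = np.where(row[mask] == 1, 1.0 / d_plus[mask], 1.0 / d_minus[mask])
        pens.append(s.mean())
    return np.array(pens)
```

On a toy instance where three perfectly reliable workers face an always-1 adversary over four tasks, the adversary's penalty is 1 while each honest worker's is 1/3, matching the intuition that agreement with the majority lowers the penalty.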
Note that the theorem justifies our definition of the reputation scores by establishing their consistency with worker reliabilities in the absence of adversarial workers. Next, consider the setting in which adversarial workers adopt the following uniform strategy: label 1 on all assigned tasks (the −1 case is symmetric).

Theorem 2. Suppose that the worker-task assignment graph G is (l, r)-regular. Let the probability of an arbitrary worker being honest be q, and suppose each adversary adopts the uniform strategy in which she labels 1 on all the tasks assigned to her. Denote an arbitrary honest worker by hi and any adversary by a. Then, with high probability as r → ∞, we have:
1. If γ = 1/2 and µi = 1, then pen(hi) < pen(a) if and only if q > 1/(1 + µ).
2. If γ = 1/2 and q > 1/(1 + µ), then pen(hi) < pen(a) if and only if
     µi ≥ [(2 − q)(1 − q − q²µ²) − q²µ²] / [(2 − q)q + q²µ²].

The above theorem establishes that when adversaries adopt the uniform strategy, the soft-penalty algorithm assigns lower penalties to honest workers whose reliability exceeds a threshold, as long as the fraction of honest workers is "large enough". Although not stated, the result immediately extends (with a modified lower bound for µi) to the case γ > 1/2, which corresponds to adversaries adopting smart strategies that label based on the prevalence of positive tasks.

Sophisticated adversaries. Numerous real-world incidents show evidence of malicious worker behavior in online systems [18, 22, 12]. Moreover, attacks on the training process of ML models have also been studied [15, 1]. Recent work [21] has also shown the impact of powerful adversarial attacks by administrators of crowdturfing (malicious crowdsourcing) sites. Motivated by these examples, we consider sophisticated adversaries:

Definition 1. Sophisticated adversaries are computationally unbounded and colluding.
Further, they have knowledge of the labels provided by the honest workers, and their goal is to maximize the number of tasks whose true label is incorrectly identified.

We now raise the following question: in the presence of sophisticated adversaries, does there exist a fundamental limit on the number of tasks whose true label can be correctly identified, irrespective of the aggregation algorithm employed to aggregate the worker labels? To answer this question precisely, we introduce some notation. Let n = |W| and m = |T|. We represent any label aggregation algorithm as a decision rule R : L → {1, −1}^m, which maps the observed labeling matrix L to a set of output labels, one per task. Because of the absence of any auxiliary information about the workers or the tasks, the class of decision rules, say C, is invariant to permutations of the identities of workers and/or tasks. More precisely, C denotes the class of decision rules that satisfy R(PLQ) = R(L)Q for any n × n permutation matrix P and m × m permutation matrix Q. We say that a task is affected if the decision rule outputs the incorrect label for it. We define the quality of a decision rule R(·) as the worst-case number of affected tasks over all possible true labelings and adversary strategies, for a fixed honest worker labeling pattern. Fixing the honest worker labeling pattern isolates the effect of the adversary strategy on the accuracy of the decision rule, and taking the worst case over all possible true labels makes the metric robust to ground-truth assignments, which are typically application specific. To formally define the quality, let BH denote the honest worker-task assignment graph and y = (y1, y2, . . . , ym) the vector of true task labels. Since the number of affected tasks also depends on the actual honest worker labels, we further assume that all honest workers have reliability µi = 1, i.e., they always label correctly.
This assumption allows us to attribute any mis-identification of true labels to the presence of adversaries, because otherwise, in the absence of adversaries, the true labels of all the tasks could be trivially identified. Finally, let Sk denote the strategy space of k adversaries, where each strategy σ ∈ Sk specifies the k × m label matrix provided by the adversaries. Since we do not restrict the adversary strategy in any way, it follows that Sk = {−1, 0, 1}^(k×m). The quality of a decision rule is then defined as

Aff(R, BH, k) = max_{σ ∈ Sk, y ∈ {1,−1}^m} |{ tj ∈ T : R_{tj}^{y,σ} ≠ yj }|,

where R_t^{y,σ} ∈ {1, −1} is the label output by the decision rule R for task t when the true label vector is y and the adversary strategy is σ. Note that the notation Aff(R, BH, k) makes explicit the dependence of the quality measure on the honest worker-task assignment graph BH and the number of adversaries k. We answer the question raised above in the affirmative, i.e., there does exist a fundamental limit on identification. In the theorem below, PreIm(T′) is the set of honest workers who label at least one task in T′.

Theorem 3. Suppose that k = |A| and µh = 1 for all honest workers h ∈ H. Then, given any honest worker-task assignment graph BH, there exists an adversary strategy σ* ∈ Sk, independent of the decision rule R ∈ C, such that

L ≤ max_{y ∈ {−1,1}^m} Aff(R, σ*, y)  for all R ∈ C,  where  L = (1/2) max_{T′ ⊆ T : |PreIm(T′)| ≤ k} |T′|,

and Aff(R, σ*, y) denotes the number of affected tasks under adversary strategy σ*, decision rule R, and true label vector y (with the convention that the max over an empty set is zero).

We describe the main idea of the proof, which proceeds in two steps: (i) we provide an explicit construction of an adversary strategy σ* that depends on BH and y, and (ii) we show the existence of another true labeling ŷ such that R outputs exactly the same labels in both scenarios.
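On small instances, the quantity L in Theorem 3 can be computed by brute force, enumerating the subsets T′ whose preimage contains at most k honest workers (exponential time, illustration only; the function name and example graph are ours):

```python
from itertools import combinations

def theorem3_bound(honest_edges, tasks, k):
    """L = (1/2) * max{ |T'| : T' ⊆ tasks, |PreIm(T')| <= k } by enumeration.

    honest_edges: list of (worker, task) pairs of the honest assignment graph BH.
    """
    best = 0
    for r in range(1, len(tasks) + 1):
        for Tp in combinations(tasks, r):
            # PreIm(T'): honest workers labeling at least one task in T'.
            preim = {w for (w, t) in honest_edges if t in Tp}
            if len(preim) <= k:
                best = max(best, r)
    return best / 2
```

For instance, if one honest worker covers two tasks and another covers a third, a single adversary (k = 1) can target the first worker's two-task block, giving L = 1.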
The adversary labeling strategy we construct uses the idea of indistinguishability, which captures the fact that by carefully choosing their labels, the adversaries can render themselves indistinguishable from honest workers. Specifically, in the simple case when there is only one honest worker, the adversary simply labels the opposite of the honest worker on all assigned tasks, so that each task receives two labels of opposite parity. It can be argued that, since there is no other information, it is impossible for any decision rule R ∈ C to distinguish the honest worker from the adversary, and hence to identify the true label of any task (better than a random guess). We extend this to the general case, where the adversary "targets" at most k honest workers and derives a strategy based on the subgraph of B_H restricted to the targeted workers. The resulting strategy can be shown to produce incorrect labels for at least L tasks for some true labeling of the tasks.

Hard penalty. We now analyze the performance of the hard penalty-based reputation algorithm in the presence of sophisticated adversarial workers. For the purposes of the theorem, we consider a natural extension of our reputation algorithm that also performs label aggregation (see Figure 1).

Theorem 4. Suppose that k = |A| and µ_i = 1 for each honest worker, i.e., an honest worker always provides the correct label. Further, let d_1 ≥ d_2 ≥ ... ≥ d_{|H|} denote the degrees of the honest workers in the optimal semi-matching on B_H. For any true labeling of the tasks and under the penalty-based label aggregation algorithm (with the convention that d_i = 0 for i > |H|):
1. There exists an adversary strategy σ* such that the number of affected tasks is at least Σ_{i=1}^{k−1} d_i.
2. No adversary strategy can affect more than U tasks, where
   (a) U = Σ_{i=1}^{k} d_i, when at most one adversary provides correct labels;
   (b) U = Σ_{i=1}^{2k} d_i, in the general case.
A few remarks are in order.
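The bounds in Theorem 4 are simple functions of the degree sequence of the optimal semi-matching; a toy helper makes the arithmetic explicit (the degree values below are invented for illustration):

```python
def theorem4_bounds(degrees, k):
    """degrees: honest-worker degrees in the optimal semi-matching.
    Returns (lower, upper_one_correct, upper_general) per Theorem 4,
    using the convention d_i = 0 for i > |H|."""
    d = sorted(degrees, reverse=True)
    pad = d + [0] * (2 * k)
    lower = sum(pad[: k - 1])     # sum_{i=1}^{k-1} d_i
    upper_a = sum(pad[:k])        # at most one adversary labels correctly
    upper_b = sum(pad[: 2 * k])   # general case
    return lower, upper_a, upper_b

print(theorem4_bounds([5, 4, 3, 2, 1], k=2))  # -> (5, 9, 14)
```

The gap between the lower bound and bound (a) is exactly d_k, matching the tightness discussion that follows.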
First, it can be shown [7] that for optimal semi-matchings the degree sequence is unique, so the bounds in the theorem above are uniquely defined given B_H. Also, the assumption that µ_i = 1 is required for analytical tractability; proving theoretical guarantees in crowdsourced settings (even without adversaries) for general graph structures is notoriously hard [2]. Note that Theorem 4 provides both a lower and an upper bound on the number of tasks that can be affected by k adversaries under the penalty-based aggregation algorithm. The characterization we obtain is reasonably tight when at most one adversary provides correct labels; in this case the gap between the upper and lower bounds is d_k, which can be "small" for k large enough. However, our characterization is loose in the general case when adversaries can label arbitrarily; here the gap is Σ_{i=k}^{2k} d_i, which we attribute to our proof technique, and we conjecture that the upper bound of Σ_{i=1}^{k} d_i also applies in the more general case.

4 Experiments

In this section, we evaluate the performance of our reputation algorithms on both synthetic and real datasets. We consider the following popular label aggregation algorithms: (a) simple majority voting MV; (b) the EM algorithm [3]; (c) the BP-based KOS algorithm [9]; and (d) KOS+, a normalized version of KOS that is amenable to arbitrary graphs (KOS is designed for random regular graphs). We compare their accuracy in inferring the true labels of the tasks when implemented in conjunction with our reputation algorithms. We implemented iterative versions of Algorithms 1 (SOFT) and 2 (HARD), where in each iteration we filtered out the worker with the highest penalty and recomputed penalties for the remaining workers.

              Random            Malicious         Uniform
              Low      High     Low      High     Low      High
MV            9.9      7.9      16.8     15.6     26.0     15.0
EM           -1.9      6.3     -1.6     -49.4    -1.2     -9.1
KOS          -4.3     13.1     -8.3     -98.7    -6.5     12.9
KOS+         -3.9      7.3     -8.3     -69.6    -6.0     10.7
PRECISION    81.7     82.1     92.5     59.4     80.8     62.4
BEST      MV-SOFT  MV-HARD  MV-SOFT    KOS    MV-SOFT  MV-HARD

PENALTY-BASED AGGREGATION: let w_t be the worker that task t is mapped to in the optimal semi-matching (OSM) in Algorithm 2. Output y(t) = 1 if d_{w_t+} < d_{w_t−}, y(t) = −1 if d_{w_t+} > d_{w_t−}, and y(t) = 0 otherwise (here y refers to the label of the task and d_w refers to the degree of worker w in the OSM).

Figure 1: Left: Percentage decrease in incorrect tasks on synthetic data (negative indicates an increase in incorrect tasks). We implemented both SOFT and HARD and report the best outcome. Also reported is the precision when removing the 15 workers with the highest penalties. The columns specify the three types of adversaries, and High/Low indicates the degree bias of the adversaries. The probability q that a worker is honest was set to 0.7 and the prevalence γ of positive tasks was set to 0.5. The numbers reported are averages over 100 experimental runs. The last row lists the combination with the best accuracy in each case. Right: The penalty-based label aggregation algorithm.

Synthetic Dataset. We analyzed the performance of our soft penalty-based reputation algorithm on (l, r)-regular graphs in Section 3. In many practical scenarios, however, the worker-task assignment graph forms organically, with the workers deciding which tasks to label. To consider this case, we simulated a setup of 100 workers with a power-law distribution over worker degrees to generate the bipartite worker-task assignment graph. We assume that an honest worker always labels correctly (the results are qualitatively similar when honest workers make errors with small probability) and consider three notions of adversaries: (a) random, who label each task 1 or −1 with probability 1/2; (b) malicious, who always label incorrectly; and (c) uniform, who label 1 on all tasks. We also consider both the case where the adversaries are biased towards high degrees and the case where they have low degrees.
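As an illustration (not the authors' code), the synthetic setup and the iterative highest-penalty filtering can be sketched in NumPy. The disagreement penalty below is a hypothetical stand-in for the SOFT/HARD penalties defined earlier in the paper, and the power-law degree generator is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_labels(n_workers=100, n_tasks=300, q=0.7, adversary="malicious"):
    """Honest workers (prob. q) always label correctly; adversaries are
    'random', 'malicious', or 'uniform'. Worker degrees are power-law-ish.
    Returns labeling matrix L (0 = unassigned), true labels, honesty mask."""
    truth = rng.choice([-1, 1], size=n_tasks)            # prevalence 0.5
    honest = rng.random(n_workers) < q
    degrees = np.minimum(5 * np.ceil(rng.pareto(1.5, n_workers) + 1).astype(int), n_tasks)
    L = np.zeros((n_workers, n_tasks), dtype=int)
    for w in range(n_workers):
        tasks = rng.choice(n_tasks, size=degrees[w], replace=False)
        if honest[w]:
            L[w, tasks] = truth[tasks]
        elif adversary == "random":
            L[w, tasks] = rng.choice([-1, 1], size=degrees[w])
        elif adversary == "malicious":
            L[w, tasks] = -truth[tasks]
        else:                                            # uniform
            L[w, tasks] = 1
    return L, truth, honest

def disagreement_penalty(L):
    # hypothetical stand-in: fraction of a worker's labels disagreeing with MV
    mv = np.sign(L.sum(axis=0))
    labeled = L != 0
    return np.where(labeled, L != mv, False).sum(axis=1) / np.maximum(labeled.sum(axis=1), 1)

def iterative_filter(L, penalty_fn, n_remove):
    """Repeatedly drop the worker with the highest penalty, recomputing
    penalties over the remaining workers (the iterative SOFT/HARD variants)."""
    active = list(range(L.shape[0]))
    removed = []
    for _ in range(n_remove):
        worst = active[int(np.argmax(penalty_fn(L[active])))]
        removed.append(worst)
        active.remove(worst)
    return removed, active

L, truth, honest = simulate_labels()
removed, active = iterative_filter(L, disagreement_penalty, 15)
precision = np.mean(~honest[removed])   # fraction of removals that are adversaries
```

Against malicious adversaries, the disagreement penalty separates honest and adversarial workers cleanly, so the filtering precision is high.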
Further, we (arbitrarily) remove the 15% of workers with the highest penalties, and we define precision as the percentage of filtered workers who were adversarial. Figure 1 shows the performance improvement of the different benchmarks in the presence of our reputation algorithm. We make a few observations. First, we are successful in identifying random adversaries as well as low-degree malicious and uniform adversaries (precision > 80%). This shows that our reputation algorithms also perform well when worker-task assignment graphs are non-regular, complementing the theoretical results (Theorems 1 and 2) for regular graphs. Second, our filtering algorithm can yield a significant reduction (up to 26%) in the fraction of incorrect tasks. In fact, in 5 out of 6 cases the best performing algorithm incorporates our reputation algorithm. Note that since 15 workers are filtered out, labels from fewer workers are used to infer the true labels of the tasks. Despite using fewer labels, we improve performance because the high precision of our algorithms ensures that mostly adversaries are filtered out. Third, when the adversaries are malicious and have high degrees, the removal of 15 workers degrades the performance of the KOS (and KOS+) and EM algorithms. We attribute this to the fact that while KOS and EM are able to automatically invert the malicious labels, we discard these labels, which hurts performance since the adversaries have high degrees. Finally, the SOFT (HARD) penalty algorithm tends to perform better when adversaries are biased towards low (high) degrees, and this insight can guide the choice of reputation algorithm in different scenarios.

Real Datasets. Next, we evaluated our algorithm on some standard datasets: (a) TREC²: a collection of topic-document pairs labeled as relevant or non-relevant by workers on AMT. We consider two versions: stage2 and task2.
(b) NLP [17]: annotations by AMT workers for different NLP tasks: (1) rte, which provides binary judgments for textual entailment, i.e., whether one sentence can be inferred from another, and (2) temp, which provides binary judgments for the temporal ordering of events. (c) bluebird [23]: judgments differentiating two kinds of birds in an image.

² http://sites.google.com/site/treccrowd/home

Table 1 reports the best accuracy achieved when up to 10 workers are filtered using our iterative reputation algorithms.

Dataset    | MV: Base Soft Hard        | EM: Base Soft Hard        | KOS: Base Soft Hard        | KOS+: Base Soft Hard
rte        | 91.9  92.1(8)  92.5(3)    | 92.7  92.7     93.3(9)    | 49.7  88.8(9)  91.6(10)    | 91.3  92.7(8)  92.8(10)
temp       | 93.9  93.9     94.3(5)    | 94.1  94.1     94.1       | 56.9  69.2(4)  93.7(3)     | 93.9  94.3(7)  94.3(1)
bluebird   | 75.9  75.9     75.9       | 89.8  89.8     89.8       | 72.2  75.9(3)  72.2        | 72.2  75.9(3)  72.2
stage2     | 74.1  74.1     81.4(3)    | 64.7  65.3(6)  78.9(2)    | 74.5  74.5     75.2(3)     | 75.5  76.6(2)  77.2(3)
task2      | 64.3  64.3     68.4(10)   | 66.8  66.8     67.3(9)    | 57.4  57.4     66.7(10)    | 59.3  59.4(4)  67.9(9)
aggregate  | 80.0  80.0     82.5       | 81.6  81.7     84.7       | 62.1  73.2     79.9        | 78.4  79.8     80.9

Table 1: Percentage accuracy of the benchmark algorithms when combined with our reputation algorithms. For each benchmark, the best performing combination is shown in bold. The number in parentheses is the number of workers filtered by our reputation algorithm (its absence indicates that no performance improvement was achieved when removing up to 10 workers with the highest penalties).

The main conclusion we draw is that our reputation algorithms are able to boost the performance of state-of-the-art aggregation algorithms by a significant amount across the datasets: with the hard penalty-based reputation algorithm, the average improvement for MV and KOS+ is 2.5%, for EM 3%, and for KOS almost 18%. Second, we can elevate the performance of simpler algorithms such as KOS and MV to the levels of the more general (and in some cases more complicated) EM algorithm.
The KOS algorithm, for instance, is not only easier to implement but also has tight theoretical guarantees when the underlying assignment graph is a sparse random regular graph, and it is robust to different initializations [9]. The modified version KOS+ can be used on graphs where worker degrees are skewed, and we are still able to enhance its accuracy. Our results provide evidence that existing random models are inadequate for capturing the behavior of workers in real-world datasets, motivating a more general approach. Third, note that the hard penalty-based algorithm outperforms the soft version across the datasets. Since the hard penalty algorithm works well when adversaries have higher degrees (a fact noticed in the simulation results above), this suggests the presence of high-degree adversarial workers in these datasets. Finally, our algorithm was successful in identifying the following types of "adversaries": (1) uniform workers who always label 1 or −1 (in temp, task2, stage2), (2) malicious workers who mostly label incorrectly (in bluebird, stage2), and (3) random workers who label each task independently of its true label (such workers were identified as "spammers" in [13]). Therefore, our algorithm is able to identify a broad set of adversary strategies in addition to those detected by existing techniques.

5 Conclusions

This paper analyzes the problem of inferring the true labels of tasks in crowdsourcing systems in the presence of a broad class of adversarial labeling strategies. The main contribution is the design of a reputation-based worker filtering algorithm that uses a combination of disagreement-based penalties and optimal semi-matchings to identify adversarial workers. We show that our reputation scores are consistent with the existing notion of worker reliabilities and, further, can identify adversaries that employ deterministic labeling strategies.
Empirically, we show that our algorithm can be applied to real crowdsourced datasets to enhance the accuracy of existing label aggregation algorithms. Further, we analyze the scenario of worst-case adversaries and establish lower bounds on the minimum "damage" achievable by the adversaries.

Acknowledgments

We thank the anonymous reviewers for their valuable feedback. Ashwin Venkataraman was supported by the Center for Technology and Economic Development (CTED).

References

[1] B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. arXiv preprint arXiv:1206.6389, 2012.
[2] N. Dalvi, A. Dasgupta, R. Kumar, and V. Rastogi. Aggregating crowdsourced binary ratings. In Proceedings of the 22nd International Conference on World Wide Web, pages 285–294. International World Wide Web Conferences Steering Committee, 2013.
[3] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, pages 20–28, 1979.
[4] G. Demartini, D. E. Difallah, and P. Cudré-Mauroux. ZenCrowd: leveraging probabilistic reasoning and crowdsourcing techniques for large-scale entity linking. In Proceedings of the 21st International Conference on World Wide Web, pages 469–478. ACM, 2012.
[5] J. S. Downs, M. B. Holbrook, S. Sheng, and L. F. Cranor. Are your participants gaming the system?: screening Mechanical Turk workers. In Proceedings of the 28th International Conference on Human Factors in Computing Systems, pages 2399–2402. ACM, 2010.
[6] A. Ghosh, S. Kale, and P. McAfee. Who moderates the moderators?: crowdsourcing abuse detection in user-generated content. In Proceedings of the 12th ACM Conference on Electronic Commerce, pages 167–176. ACM, 2011.
[7] N. J. Harvey, R. E. Ladner, L. Lovász, and T. Tamir. Semi-matchings for bipartite graphs and load balancing. In Algorithms and Data Structures, pages 294–306. Springer, 2003.
[8] P. G. Ipeirotis, F. Provost, and J. Wang. Quality management on Amazon Mechanical Turk. In Proceedings of the ACM SIGKDD Workshop on Human Computation, pages 64–67. ACM, 2010.
[9] D. R. Karger, S. Oh, and D. Shah. Iterative learning for reliable crowdsourcing systems. Neural Information Processing Systems, 2011.
[10] A. Kittur, E. H. Chi, and B. Suh. Crowdsourcing user studies with Mechanical Turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 453–456. ACM, 2008.
[11] J. Le, A. Edmonds, V. Hester, and L. Biewald. Ensuring quality in crowdsourced search relevance evaluation: The effects of training question distribution.
[12] K. Lee, P. Tamilarasan, and J. Caverlee. Crowdturfers, campaigns, and social media: tracking and revealing crowdsourced manipulation of social media. In 7th International AAAI Conference on Weblogs and Social Media (ICWSM), Cambridge, 2013.
[13] V. C. Raykar and S. Yu. Eliminating spammers and ranking annotators for crowdsourced labeling tasks. The Journal of Machine Learning Research, 13:491–518, 2012.
[14] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. The Journal of Machine Learning Research, 99:1297–1322, 2010.
[15] B. I. Rubinstein, B. Nelson, L. Huang, A. D. Joseph, S.-h. Lau, S. Rao, N. Taft, and J. Tygar. ANTIDOTE: understanding and defending against poisoning of anomaly detectors. In Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement, pages 1–14. ACM, 2009.
[16] P. Smyth, U. Fayyad, M. Burl, P. Perona, and P. Baldi. Inferring ground truth from subjective labelling of Venus images. Advances in Neural Information Processing Systems, pages 1085–1092, 1995.
[17] R. Snow, B. O'Connor, D. Jurafsky, and A. Y. Ng. Cheap and fast—but is it good?: evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 254–263. Association for Computational Linguistics, 2008.
[18] N. Tran, B. Min, J. Li, and L. Subramanian. Sybil-resilient online content voting. In Proceedings of the 6th USENIX Symposium on Networked Systems Design and Implementation, pages 15–28. USENIX Association, 2009.
[19] J. Vuurens, A. P. de Vries, and C. Eickhoff. How much spam can you take? An analysis of crowdsourcing results to increase accuracy. In Proc. ACM SIGIR Workshop on Crowdsourcing for Information Retrieval (CIR11), pages 21–26, 2011.
[20] J. B. Vuurens and A. P. de Vries. Obtaining high-quality relevance judgments using crowdsourcing. Internet Computing, IEEE, 16(5):20–27, 2012.
[21] G. Wang, T. Wang, H. Zheng, and B. Y. Zhao. Man vs. machine: Practical adversarial detection of malicious crowdsourcing workers. In 23rd USENIX Security Symposium. USENIX Association, CA, 2014.
[22] G. Wang, C. Wilson, X. Zhao, Y. Zhu, M. Mohanlal, H. Zheng, and B. Y. Zhao. Serf and turf: crowdturfing for fun and profit. In Proceedings of the 21st International Conference on World Wide Web, pages 679–688. ACM, 2012.
[23] P. Welinder, S. Branson, S. Belongie, and P. Perona. The multidimensional wisdom of crowds. Advances in Neural Information Processing Systems, 23:2424–2432, 2010.
[24] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. Advances in Neural Information Processing Systems, 22:2035–2043, 2009.
Large Scale Canonical Correlation Analysis with Iterative Least Squares

Yichao Lu, University of Pennsylvania, yichaolu@wharton.upenn.edu
Dean P. Foster, Yahoo Labs, NYC, dean@foster.net

Abstract

Canonical Correlation Analysis (CCA) is a widely used statistical tool with both well established theory and favorable performance for a wide range of machine learning problems. However, computing CCA for huge datasets can be very slow, since it involves a QR decomposition or singular value decomposition of huge matrices. In this paper we introduce L-CCA, an iterative algorithm that can compute CCA quickly on huge sparse datasets. Theory on both the asymptotic convergence and the finite-time accuracy of L-CCA is established. The experiments also show that L-CCA outperforms other fast CCA approximation schemes on two real datasets.

1 Introduction

Canonical Correlation Analysis (CCA) is a widely used spectral method for finding correlation structure in multi-view datasets, introduced by [15]. Recently, [3, 9, 17] proved that CCA is able to find the right latent structure under a certain hidden-state model. For modern machine learning problems, CCA has already been used successfully as a dimensionality reduction technique in the multi-view setting. For example, a CCA between the text description and the image of the same object will find common structure between the two views, which generates a natural vector representation of the object. In [9], CCA is performed on a large unlabeled dataset in order to generate low dimensional features for a regression problem in which the labeled dataset is small. In [6, 7], a CCA between words and their contexts is implemented on several large corpora to generate low dimensional vector representations of words that capture useful semantic features.
When the data matrices are small, the classical algorithm for computing CCA first performs a QR decomposition of the data matrices, which pre-whitens the data, and then a singular value decomposition (SVD) of the whitened covariance matrix, as introduced in [11]. This is exactly how Matlab computes CCA. For huge datasets, however, this procedure becomes extremely slow. For data matrices with huge sample size, [2] proposed a fast CCA approach based on a fast inner-product-preserving random projection called the Subsampled Randomized Hadamard Transform, but it is still slow for datasets with a huge number of features. In this paper we introduce a fast algorithm for finding the top k_cca canonical variables from huge sparse data matrices X ∈ ℝ^{n×p1} and Y ∈ ℝ^{n×p2} (a single multiplication with these sparse matrices is very fast), the rows of which are i.i.d. samples from a pair of random vectors. Here n ≫ p1, p2 ≫ 1, and k_cca is a relatively small number like 50, since the primary goal of CCA is to generate low dimensional features. Under this setup, a QR decomposition of an n × p matrix costs O(np²), which is extremely slow even if the matrix is sparse. On the other hand, since the data matrices are sparse, X^⊤X and Y^⊤Y can be computed very fast, so another whitening strategy is to compute (X^⊤X)^{−1/2} and (Y^⊤Y)^{−1/2}. But when p1, p2 are large this takes O(max{p1³, p2³}), which is both slow and numerically unstable. The main contribution of this paper is a fast iterative algorithm, L-CCA, consisting only of QR decompositions of relatively small matrices and a few matrix multiplications that involve only huge sparse matrices or small dense matrices. This is achieved by reducing the computation of CCA to a sequence of fast least squares iterations. It is proved that L-CCA asymptotically converges to the exact CCA solution, and an error analysis for finitely many iterations is also provided.
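For small dense matrices, the classical QR-plus-SVD route described above can be sketched in a few lines of NumPy (a toy illustration, not the authors' code; the synthetic data with a shared latent signal is invented for the example):

```python
import numpy as np

def cca_qr_svd(X, Y, k):
    """Classical small-scale CCA: QR-decompose each data matrix to whiten it,
    then take the SVD of the whitened cross-covariance (cf. Matlab's canoncorr).
    Returns the top-k canonical variables and canonical correlations."""
    Qx, Rx = np.linalg.qr(X)
    Qy, Ry = np.linalg.qr(Y)
    U, d, Vt = np.linalg.svd(Qx.T @ Qy)
    return Qx @ U[:, :k], Qy @ Vt.T[:, :k], d[:k]

rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 2))                  # shared latent signal
X = np.hstack([Z, rng.standard_normal((500, 6))])
Y = np.hstack([Z + 0.1 * rng.standard_normal((500, 2)),
               rng.standard_normal((500, 4))])
Xc, Yc, d = cca_qr_svd(X, Y, 2)                    # d[0], d[1] close to 1
```

The QR step here is exactly the O(np²) bottleneck the paper sets out to avoid for huge n and p.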
As shown by the experiments, L-CCA also has favorable performance on real datasets when compared with other CCA approximations given a fixed CPU time. It is worth pointing out that approximating CCA is much more challenging than approximating the SVD (or PCA). As suggested by [12, 13], to approximate the top singular vectors of X it suffices to randomly sample a small subspace in the span of X, and a few power iterations with this small subspace will automatically converge to the directions with top singular values. CCA, on the other hand, has to search through the whole span of X and Y in order to capture directions with large correlation. For example, when the most correlated directions happen to live in the bottom singular vectors of the data matrices, the random sampling scheme will miss them completely. Intuitively, the L-CCA algorithm runs an exact search for correlation structure on the top singular vectors and a fast gradient-based approximation on the remaining directions.

2 Background: Canonical Correlation Analysis

2.1 Definition

Canonical Correlation Analysis (CCA) can be defined in many different ways. Here we use the definition in [9, 17], since this version naturally connects CCA with the singular value decomposition (SVD) of the whitened covariance matrix, which is the key to understanding our algorithm.

Definition 1. Let X ∈ ℝ^{n×p1} and Y ∈ ℝ^{n×p2}, where the rows are i.i.d. samples from a pair of random vectors. Let Φx ∈ ℝ^{p1×p1}, Φy ∈ ℝ^{p2×p2}, and use φ_{x,i}, φ_{y,j} to denote the columns of Φx, Φy respectively. Xφ_{x,i}, Yφ_{y,j} are called canonical variables if

φ_{x,i}^⊤ X^⊤ Y φ_{y,j} = d_i if i = j, and 0 if i ≠ j;
φ_{x,i}^⊤ X^⊤ X φ_{x,j} = 1 if i = j, and 0 if i ≠ j;
φ_{y,i}^⊤ Y^⊤ Y φ_{y,j} = 1 if i = j, and 0 if i ≠ j.

Xφ_{x,i}, Yφ_{y,i} is the i-th pair of canonical variables and d_i is the i-th canonical correlation.

2.2 CCA and SVD

First we introduce some notation.
Let C_xx = X^⊤X, C_yy = Y^⊤Y, and C_xy = X^⊤Y. For simplicity, assume C_xx and C_yy are full rank, and let C̃_xy = C_xx^{−1/2} C_xy C_yy^{−1/2}. The following lemma provides a way to compute the canonical variables by SVD.

Lemma 1. Let C̃_xy = U D V^⊤ be the SVD of C̃_xy, where u_i, v_j denote the left and right singular vectors and d_i the singular values. Then X C_xx^{−1/2} u_i and Y C_yy^{−1/2} v_j are the canonical variables of the X and Y spaces, respectively.

Proof. Plugging X C_xx^{−1/2} u_i and Y C_yy^{−1/2} v_j into the equations in Definition 1 directly proves Lemma 1.

As mentioned before, we are interested in computing the top k_cca canonical variables, where k_cca ≪ p1, p2. Use U1, V1 to denote the first k_cca columns of U, V respectively, and U2, V2 for the remaining columns. By Lemma 1, the top k_cca canonical variables can be represented as X C_xx^{−1/2} U1 and Y C_yy^{−1/2} V1.

3 Compute CCA by Iterative Least Squares

Since the top canonical variables are connected with the top singular vectors of C̃_xy, which can be computed by orthogonal iteration [10] (called simultaneous iteration in [21]), we can also compute CCA iteratively; a detailed procedure is given in Algorithm 1.

Algorithm 1: CCA via Iterative LS
Input: data matrices X ∈ ℝ^{n×p1}, Y ∈ ℝ^{n×p2}; target dimension k_cca; number of orthogonal iterations t1.
Output: X_kcca, Y_kcca ∈ ℝ^{n×kcca}, consisting of the top k_cca canonical variables of X and Y.
1. Generate a p1 × k_cca random matrix G with i.i.d. standard normal entries.
2. Let X_0 = XG.
3. For t = 1 to t1:
   Y_t = H_Y X_{t−1}, where H_Y = Y(Y^⊤Y)^{−1}Y^⊤;
   X_t = H_X Y_t, where H_X = X(X^⊤X)^{−1}X^⊤.
4. X_kcca = QR(X_{t1}), Y_kcca = QR(Y_{t1}), where QR(·) extracts an orthonormal basis of the column space of its argument via a QR decomposition.

The convergence of Algorithm 1 is stated in the following theorem.

Theorem 1. Assume |d1| > |d2| > ... > |d_{kcca+1}| and that U1^⊤ C_xx^{1/2} G is nonsingular (this holds with probability 1 if the elements of G are i.i.d. Gaussian).
Then the columns of X_kcca and Y_kcca converge to the top k_cca canonical variables of X and Y, respectively, as t1 → ∞.

Theorem 1 is proved by showing that Algorithm 1 is essentially an orthogonal iteration [10, 21] for computing the top k_cca eigenvectors of A = C̃_xy C̃_xy^⊤. A detailed proof is provided in the supplementary materials.

3.1 A Special Case

When X and Y are sparse and C_xx, C_yy are diagonal (as for the Penn Tree Bank dataset in the experiments), Algorithm 1 can be implemented extremely fast, since every iteration only multiplies with sparse matrices or inverts huge but diagonal matrices. A QR decomposition is performed not only at the end but after every iteration, for numerical stability (here we only need QR decompositions of matrices much smaller than X, Y). We call this fast version D-CCA in the following discussion. When C_xx, C_yy are not diagonal, computing the matrix inverse becomes very slow, but we can still run D-CCA by approximating (X^⊤X)^{−1}, (Y^⊤Y)^{−1} with (diag(X^⊤X))^{−1}, (diag(Y^⊤Y))^{−1} in Algorithm 1 when speed is a concern. This leads to poor performance when C_xx, C_yy are far from diagonal, as shown by the URL dataset in the experiments.

3.2 General Case

Algorithm 1 reduces CCA to a sequence of iterative least squares (LS) problems. When X, Y are huge, solving the LS problems exactly is still slow, since it involves inverting a huge matrix, but fast LS methods are relatively well studied. There are many ways to approximate the LS solution: optimization-based methods like gradient descent [1, 23] and stochastic gradient descent [16, 4], or random projection and subsampling based methods like [8, 5]. A fast approximation to the top k_cca canonical variables can be obtained by replacing the exact LS solution in every iteration of Algorithm 1 with a fast approximation. Here we choose LING [23], which works well for large sparse design matrices, to solve the LS problem in every CCA iteration.
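As a sanity check on small dense data, the orthogonal iteration of Algorithm 1 can be written directly in NumPy and compared against the SVD construction of Lemma 1 (a toy sketch; the shared-latent-signal test data is invented, and the exact projection matrices below are infeasible in the paper's huge sparse setting):

```python
import numpy as np

rng = np.random.default_rng(1)

def inv_sqrt(C):
    # symmetric inverse square root via eigendecomposition
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

def cca_svd(X, Y, k):
    # Lemma 1: canonical variables from the SVD of the whitened covariance
    Wx, Wy = inv_sqrt(X.T @ X), inv_sqrt(Y.T @ Y)
    U, d, Vt = np.linalg.svd(Wx @ (X.T @ Y) @ Wy)
    return X @ Wx @ U[:, :k], Y @ Wy @ Vt.T[:, :k], d[:k]

def cca_iterative(X, Y, k, t1=50):
    # Algorithm 1: orthogonal iteration with exact LS projections
    Hx = X @ np.linalg.solve(X.T @ X, X.T)   # projection onto col(X)
    Hy = Y @ np.linalg.solve(Y.T @ Y, Y.T)   # projection onto col(Y)
    Xt = X @ rng.standard_normal((X.shape[1], k))
    for _ in range(t1):
        Yt = Hy @ Xt
        Xt = Hx @ Yt
    return np.linalg.qr(Xt)[0], np.linalg.qr(Yt)[0]

# shared latent signal makes the top canonical correlations large
Z = rng.standard_normal((400, 3))
X = np.hstack([Z + 0.1 * rng.standard_normal((400, 3)),
               rng.standard_normal((400, 7))])
Y = np.hstack([Z + 0.1 * rng.standard_normal((400, 3)),
               rng.standard_normal((400, 5))])
Xe, Ye, d = cca_svd(X, Y, 3)
Xi, Yi = cca_iterative(X, Y, 3)
# cosines of principal angles between the two subspaces should be ~1
cosines = np.linalg.svd(Xe.T @ Xi, compute_uv=False)
```

The two routines agree on the span of the top canonical variables; the point of L-CCA is to keep the iteration while replacing the exact projections H_X, H_Y with fast approximations.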
The connection between CCA and LS has been developed under different setups for different purposes: [20] shows that CCA in the multi-label classification setting can be formulated as an LS problem, and [22] formulates CCA as a recursive LS problem and builds an online version based on this observation. The benefit we take from the iterative LS formulation is that running a fast LS approximation in every iteration yields a fast CCA approximation with both provable theoretical guarantees and favorable experimental performance.

Algorithm 2: LING
Input: X ∈ ℝ^{n×p}, Y ∈ ℝ^{n×1}; k_pc, the number of top left singular vectors selected; t2, the number of gradient descent iterations.
Output: Ŷ ∈ ℝ^{n×1}, an approximation to X(X^⊤X)^{−1}X^⊤Y.
1. Compute U1 ∈ ℝ^{n×kpc}, the top k_pc left singular vectors of X, by randomized SVD (see the supplementary materials for a detailed description).
2. Y1 = U1 U1^⊤ Y.
3. Compute the residual Yr = Y − Y1.
4. Use gradient descent initialized at the zero vector (see the supplementary materials for a detailed description) to approximately solve the LS problem min_{βr ∈ ℝ^p} ‖Xβr − Yr‖². Use β_{r,t2} to denote the solution after t2 gradient iterations.
5. Ŷ = Y1 + Xβ_{r,t2}.

4 Algorithm

In this section we introduce L-CCA, a fast CCA algorithm based on Algorithm 1.

4.1 LING: a Gradient Based Least Square Algorithm

First we introduce the fast LS algorithm LING mentioned in Section 3.2, which is used in every orthogonal iteration of L-CCA. Consider the LS problem

β* = arg min_{β ∈ ℝ^p} ‖Xβ − Y‖²

for X ∈ ℝ^{n×p} and Y ∈ ℝ^{n×1}. For simplicity assume X is full rank. Xβ* = X(X^⊤X)^{−1}X^⊤Y is the projection of Y onto the column space of X. We introduce a fast algorithm, LING, to approximately compute Xβ* without forming (X^⊤X)^{−1} explicitly, which is slow for large p. The intuition of LING is as follows. Let U1 ∈ ℝ^{n×kpc} (k_pc ≪ p) be the top k_pc left singular vectors of X and U2 ∈ ℝ^{n×(p−kpc)} the remaining singular vectors.
In LING we decompose Xβ* into two orthogonal components,

Xβ* = U1 U1^⊤ Y + U2 U2^⊤ Y,

the projection of Y onto the span of U1 and the projection onto the span of U2. The first term can be computed fast given U1, since k_pc is small. U1 itself can be computed fast, approximately, with the randomized SVD algorithm introduced in [12], which requires only a few fast matrix multiplications and a QR decomposition of an n × k_pc matrix (the details for finding U1 are given in the supplementary materials). Let Yr = Y − U1 U1^⊤ Y be the residual of Y after projecting onto U1. The second term is computed by solving the optimization problem min_{βr ∈ ℝ^p} ‖Xβr − Yr‖² with gradient descent (GD), also described in detail in the supplementary materials. A detailed description of LING is given in Algorithm 2. In the above discussion Y is a column vector; it is straightforward to generalize LING to the setting of Algorithm 1, where Y has multiple columns, by applying Algorithm 2 to every column of Y. In the following we use LING(Y, X, k_pc, t2) to denote the LING output with the corresponding inputs, which approximates X(X^⊤X)^{−1}X^⊤Y. The following theorem gives an error bound for LING.

Theorem 2. Use λ_i to denote the i-th singular value of X. Consider the LS problem min_{β ∈ ℝ^p} ‖Xβ − Y‖² for X ∈ ℝ^{n×p} and Y ∈ ℝ^{n×1}. Let Y* = X(X^⊤X)^{−1}X^⊤Y be the projection of Y onto the column space of X, and let Ŷ_{t2} = LING(Y, X, k_pc, t2). Then

‖Y* − Ŷ_{t2}‖² ≤ C r^{2 t2}   (1)

for some constant C > 0 and r = (λ²_{kpc+1} − λ²_p) / (λ²_{kpc+1} + λ²_p) < 1.

The proof is in the supplementary materials due to space limitations.

Remark 1. Theorem 2 gives some intuition for why LING decomposes the projection into two components. In the extreme case k_pc = 0 (i.e., the projection onto the top principal components is not removed and GD is applied directly to the LS problem), r in equation (1) becomes (λ1² − λ_p²)/(λ1² + λ_p²). Usually λ1 is much larger than λ_p, so r is very close to 1, which makes the error decay slowly. Removing the projections onto the k_pc top singular vectors accelerates the error decay by making r smaller. The benefit of this trick is easily seen in the experiments section.

4.2 Fast Algorithm for CCA

Our fast CCA algorithm L-CCA is summarized in Algorithm 3.

Algorithm 3: L-CCA
Input: data matrices X ∈ ℝ^{n×p1}, Y ∈ ℝ^{n×p2}; k_cca, the number of top canonical variables to extract; t1, the number of orthogonal iterations; k_pc, the number of top singular vectors for LING; t2, the number of GD iterations for LING.
Output: X_kcca, Y_kcca ∈ ℝ^{n×kcca}, the top k_cca canonical variables of X and Y.
1. Generate a p1 × k_cca random matrix G with i.i.d. standard normal entries.
2. Let X_0 = XG, X̂_0 = QR(X_0).
3. For t = 1 to t1:
   Y_t = LING(X̂_{t−1}, Y, k_pc, t2), Ŷ_t = QR(Y_t);
   X_t = LING(Ŷ_t, X, k_pc, t2), X̂_t = QR(X_t).
4. X_kcca = X̂_{t1}, Y_kcca = Ŷ_{t1}.

There are two main differences between Algorithms 1 and 3: we use LING to solve the least squares problems approximately, for the sake of speed, and we apply a QR decomposition to every LING output for the numerical stability reasons mentioned in [21].

4.3 Error Analysis of L-CCA

This section provides mathematical results on how well the output of L-CCA approximates the subspace spanned by the top k_cca true canonical variables for finite t1 and t2. Note that the asymptotic convergence of L-CCA as t1, t2 → ∞ has already been stated in Theorem 1. First we define the distance between subspaces, as introduced in Section 2.6.3 of [10]:

Definition 2. Assume the matrices are full rank. The distance between the column spaces of matrices W1 ∈ ℝ^{n×k} and Z1 ∈ ℝ^{n×k} is defined by dist(W1, Z1) = ‖H_{W1} − H_{Z1}‖₂, where H_{W1} = W1(W1^⊤W1)^{−1}W1^⊤ and H_{Z1} = Z1(Z1^⊤Z1)^{−1}Z1^⊤ are projection matrices, and the matrix norm is the spectral norm. It is easy to see that dist(W1, Z1) = dist(W1R1, Z1R2) for any invertible k × k matrices R1, R2.

We continue to use the notation defined in Section 2.
Recall that XC_xx^{−1/2}U1 gives the top kcca canonical variables of X. The following theorem bounds the distance between the truth XC_xx^{−1/2}U1 and X̂_{t1}, the L-CCA output after finitely many iterations.

Theorem 3. The distance between the subspace spanned by the top kcca canonical variables of X and the subspace returned by L-CCA is bounded by

dist(X̂_{t1}, XC_xx^{−1/2}U1) ≤ C1 (d_{kcca+1}/d_{kcca})^{2 t1} + C2 · d²_{kcca}/(d²_{kcca} − d²_{kcca+1}) · r^{2 t2},

where C1, C2 are constants, 0 < r < 1 is introduced in Theorem 2, t1 is the number of power iterations in L-CCA, and t2 is the number of gradient iterations for solving every LS problem. The proof of Theorem 3 is in the supplementary materials.

5 Experiments

In this section we compare several fast algorithms for computing CCA on large datasets. First let us introduce the algorithms compared in the experiments.

• RPCCA: Instead of running CCA directly on the high-dimensional X, Y, RPCCA computes CCA only between the top krpcca principal components (left singular vectors) of X and Y, where krpcca ≪ p1, p2. For large n, p1, p2 we use the randomized algorithm introduced in [12] for computing the top principal components of X and Y (see the supplementary materials for details). The tuning parameter that controls the tradeoff between computational cost and accuracy is krpcca. When krpcca is small, RPCCA is fast but fails to capture the correlation structure in the bottom principal components of X and Y. As krpcca grows, the principal components capture more structure in the X, Y spaces, but they take longer to compute. In the experiments we vary krpcca.

• D-CCA: See Section 3.1 for a detailed description. The advantage of D-CCA is that it is extremely fast. In the experiments we iterate 30 times (t1 = 30) to make sure D-CCA achieves convergence. As mentioned earlier, when Cxx and Cyy are far from diagonal, D-CCA becomes inaccurate.

• L-CCA: See Algorithm 3 for a detailed description.
We find that the accuracy of LING in every orthogonal iteration is crucial to finding directions with large correlation, while a small t1 suffices. So in the experiments we fix t1 = 5 and vary t2. In both experiments we fix kpc = 100, so the top kpc singular vectors of X, Y and every LING iteration can be computed relatively fast.

• G-CCA: A special case of Algorithm 3 where kpc is set to 0, i.e., the LS projection in every iteration is computed directly by GD. G-CCA does not need to compute the top singular vectors of X and Y as L-CCA does, but by equation (1) and Remark 1, GD takes more iterations to converge than LING. Comparing G-CCA and L-CCA in the experiments illustrates the benefit of removing the top singular vectors in LING and how this affects the performance of the CCA algorithm. As with L-CCA, we fix the number of orthogonal iterations t1 to 5 and vary t2, the number of gradient iterations for solving LS.

RPCCA, L-CCA, and G-CCA are all "asymptotically correct" algorithms in the sense that, given infinite CPU time, all three provide the exact CCA solution, while D-CCA is extremely fast but relies on the assumption that X and Y both have orthogonal columns. Intuitively, given fixed CPU time, RPCCA does an exact search on the krpcca top principal components of X and Y; L-CCA does an exact search on the top kpc principal components (kpc < krpcca) and a crude search over the other directions; and G-CCA does a crude search over all directions. The comparison is in fact testing which strategy is most effective in finding large correlations in huge datasets.

Remark 2. Both RPCCA and G-CCA can be regarded as special cases of L-CCA: when t1 is large and t2 = 0, L-CCA becomes RPCCA, and when kpc = 0, L-CCA becomes G-CCA.

In the following experiments we aim at extracting the 20 most correlated directions from huge data matrices X and Y.
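To make the moving parts concrete, here is a minimal numpy sketch of LING and of the orthogonal-iteration loop of Algorithm 3. It is an illustration under our own naming, not the authors' code: an exact SVD stands in for the randomized SVD of [12], and exact dense algebra stands in for the sparse operations used at scale.

```python
import numpy as np

def ling(Y, X, kpc, t2):
    """Sketch of LING: approximate X (X^T X)^{-1} X^T Y by an exact projection
    onto the top-kpc left singular vectors of X plus t2 gradient-descent
    steps on the residual LS problem."""
    U1, _, _ = np.linalg.svd(X, full_matrices=False)
    U1 = U1[:, :kpc]
    top = U1 @ (U1.T @ Y)          # exact part of the projection
    Yr = Y - top                   # residual handled by GD
    beta = np.zeros((X.shape[1],) + Y.shape[1:])
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # safe step size 1/lambda_1^2
    for _ in range(t2):
        beta -= step * (X.T @ (X @ beta - Yr))
    return top + X @ beta

def l_cca(X, Y, kcca, t1, kpc, t2, seed=0):
    """Sketch of Algorithm 3: alternating approximate LS projections with QR
    re-orthonormalization."""
    rng = np.random.default_rng(seed)
    Xt, _ = np.linalg.qr(X @ rng.standard_normal((X.shape[1], kcca)))
    for _ in range(t1):
        Yt, _ = np.linalg.qr(ling(Xt, Y, kpc, t2))  # project onto col(Y)
        Xt, _ = np.linalg.qr(ling(Yt, X, kpc, t2))  # project onto col(X)
    return Xt, Yt
```

On data with a planted shared signal, the returned orthonormal bases capture the correlated directions after a handful of iterations, which is the behavior the error bounds above quantify.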
The output of each of the above four algorithms is a pair of n × 20 matrices Xkcca and Ykcca whose columns contain the most correlated directions. Then a CCA is performed between Xkcca and Ykcca with the matlab built-in CCA function. The canonical correlations between Xkcca and Ykcca indicate the amount of correlation captured from the huge X, Y spaces by the four algorithms. In all the experiments, we vary krpcca for RPCCA and t2 for L-CCA and G-CCA so that these three algorithms spend almost the same CPU time (D-CCA is always the fastest). The 20 canonical correlations between the subspaces returned by the four algorithms are plotted (larger means better).

We want to make two additional comments here based on the reviewers' feedback. First, for the two datasets considered in the experiments, classical CCA algorithms like the matlab built-in function take more than an hour, while our algorithm is able to get an approximate answer in less than 10 minutes. Second, in the experiments we have been focusing on getting a good fit on the training datasets, and the performance is evaluated by the magnitude of correlation captured in sample. To achieve better generalization performance, a common trick is to perform regularized CCA [14], which easily fits into our framework since it is equivalent to running iterative ridge regression instead of OLS in Algorithm 1. Since our goal is a fast and accurate fit, we do not pursue generalization performance here, which is a separate statistical issue.

5.1 Penn Tree Bank Word Co-occurrence

CCA has already been successfully applied to building low-dimensional word embeddings [6, 7]. So the first task is a CCA between words and their context. The dataset used is the full Wall Street Journal part of the Penn Tree Bank, which consists of 1.17 million tokens and a vocabulary of size 43k [18]. The rows of the X matrix consist of indicator vectors of the current word, and the rows of Y consist of indicators of the word that follows.
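The indicator construction described above can be sketched on a toy token stream (a hypothetical vocabulary; dense numpy matrices stand in for the sparse matrices used at this scale):

```python
import numpy as np

# Build indicator matrices: X encodes the current word, Y the next word.
tokens = ["the", "cat", "sat", "on", "the", "mat"]
vocab = sorted(set(tokens))
idx = {w: i for i, w in enumerate(vocab)}

pairs = list(zip(tokens[:-1], tokens[1:]))  # (current word, next word)
X = np.zeros((len(pairs), len(vocab)))
Y = np.zeros((len(pairs), len(vocab)))
for r, (w, w_next) in enumerate(pairs):
    X[r, idx[w]] = 1.0       # one-hot row for the current word
    Y[r, idx[w_next]] = 1.0  # one-hot row for the following word
```

Because every row contains a single 1, X⊤X is diagonal (its diagonal holds the word counts); this is exactly the structure that makes Cxx, Cyy diagonal and D-CCA accurate on this dataset.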
To avoid sample sparsity for Y we only consider the 3000 most frequent words, i.e., we only keep tokens followed by one of the 3000 most frequent words, which leaves about 1 million tokens. So X is of size 1000k × 43k and Y is of size 1000k × 3k, where both X and Y are very sparse. Note that every row of X and Y has a single 1, since the rows are indicators of words. So in this case Cxx and Cyy are diagonal, and D-CCA can compute a very accurate CCA in less than a minute, as mentioned in Section 3.1. On the other hand, even though this dataset can be solved efficiently by D-CCA, it is interesting to look at the behavior of the other three algorithms, which do not make use of the special structure of this problem, and compare them with D-CCA, which can be regarded as the ground truth in this particular case. For RPCCA, L-CCA, and G-CCA we try three different parameter setups, shown in Table 1, and the 20 correlations are shown in Figure 1. Among the three algorithms, L-CCA performs best and gets quite close to D-CCA as CPU time increases. RPCCA does not perform well, since much of the correlation structure of word co-occurrence lies in low-frequency words, which cannot be captured by the top principal components of X, Y. Since the most frequent word occurs 60k times and the least frequent words occur only once, the spectrum of X drops quickly, which makes GD converge very slowly; so G-CCA does not perform well either.

Table 1: Parameter setup for the two real datasets. The t2 columns give the GD iteration counts for L-CCA and G-CCA; CPU time is in seconds.

PTB word co-occurrence                      URL features
id  krpcca   t2       t2       CPU          id  krpcca   t2       t2       CPU
    (RPCCA)  (L-CCA)  (G-CCA)  time             (RPCCA)  (L-CCA)  (G-CCA)  time
1   300      7        17       170          1   600      4        7        220
2   500      38       51       460          2   600      11       16       175
3   800      115      127      1180         3   600      13       17       130

5.2 URL Features

The second dataset is the URL Reputation dataset from the UCI machine learning repository. The dataset contains 2.4 million URLs, each represented by 3.2 million features. For simplicity we only use the first 400k URLs.
38% of the features are host-based features such as WHOIS info and IP prefix, and 62% are lexical features such as hostname and primary domain. See [19] for detailed information about this dataset. Unfortunately the features are anonymized, so we pick the first 35% of the features as our X and the last 35% as our Y. We remove the 64 continuous features and only use the Boolean features. We sort the features according to their frequency (each feature is a column of 0s and 1s; the column with the most 1s is the most frequent feature). We run CCA on three different subsets of X and Y. In the first experiment we select the 20k most frequent features of X and Y respectively.

Figure 1: PTB word co-occurrence: canonical correlations of the 20 directions returned by the four algorithms (L-CCA, D-CCA, RPCCA, G-CCA) at CPU times of 170, 460, and 1180 seconds. The x axis gives the indices and the y axis the correlations.

In the second experiment we select the 20k most frequent features from X, Y after removing the top 100 most frequent features of X and the top 200 most frequent features of Y. In the third experiment we remove the top 200 most frequent features from X and the top 400 most frequent features of Y. So in these experiments we are doing CCA between two 400k × 20k data matrices. In this dataset the features within X and within Y have large correlations, so Cxx and Cyy are no longer diagonal. But we still run D-CCA since it is extremely fast. The parameter setups for the three subsets are shown in Table 1, and the 20 correlations are shown in Figure 2. For this dataset the fast D-CCA does not capture the largest correlations, since the correlations within X and Y make Cxx, Cyy non-diagonal.
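The correlations reported in these figures come from the small final CCA between the returned n × 20 matrices. That final step (played by the matlab built-in function in the experiments) can be sketched via orthonormal bases and an SVD; the function name is ours:

```python
import numpy as np

def canonical_correlations(A, B):
    """Canonical correlations between the columns of A and B.

    Orthonormalize each block with QR, then take the singular values of the
    cross-product of the bases; these are the cosines of the principal angles,
    i.e. the canonical correlations.
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False)
```

Run on Xkcca, Ykcca, this yields the 20 values plotted per algorithm (larger means more correlation captured).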
RPCCA has the best performance in experiment 1 but not as good in experiments 2 and 3. On the other hand, G-CCA performs well in experiment 3 but poorly in 1 and 2. The reason is as follows. In experiment 1 the data matrices are relatively dense, since they include some frequent features, so every gradient iteration in L-CCA and G-CCA is slow. Moreover, since there are a few high-frequency features and most features have very low frequency, the spectra of the data matrices in experiment 1 are very steep, which makes the GD in every iteration of G-CCA converge very slowly. These factors lead to the poor performance of G-CCA. In experiment 3, with the frequent features removed, the data matrices become sparser and have a flatter spectrum, which favors G-CCA. L-CCA has stable and close-to-best performance despite these variations in the datasets.

Figure 2: URL: canonical correlations of the 20 directions returned by the four algorithms (L-CCA, D-CCA, RPCCA, G-CCA) on the three URL subsets, at CPU times of 220, 175, and 130 seconds. The x axis gives the indices and the y axis the correlations.

6 Conclusion and Future Work

In this paper we introduce L-CCA, a fast CCA algorithm for huge sparse data matrices. We construct a theoretical bound for the approximation error of L-CCA relative to the true CCA solution, and present experiments on two real datasets in which L-CCA has favorable performance. On the other hand, there are many interesting fast LS algorithms with provable guarantees that could be plugged into the iterative LS formulation of CCA. Moreover, in the experiments we focus, for simplicity, on how much correlation is captured by L-CCA.
It is also interesting to use L-CCA for feature generation and to evaluate its performance on specific learning tasks.

References

[1] Marina A. Epelman. Rate of convergence of steepest descent algorithm. 2007.
[2] Haim Avron, Christos Boutsidis, Sivan Toledo, and Anastasios Zouzias. Efficient dimensionality reduction for canonical correlation analysis. In ICML (1), pages 347–355, 2013.
[3] Francis R. Bach and Michael I. Jordan. A probabilistic interpretation of canonical correlation analysis. Technical report, University of California, Berkeley, 2005.
[4] Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Yves Lechevallier and Gilbert Saporta, editors, Proceedings of the 19th International Conference on Computational Statistics (COMPSTAT 2010), pages 177–187, Paris, France, August 2010. Springer.
[5] Paramveer Dhillon, Yichao Lu, Dean P. Foster, and Lyle Ungar. New subsampling algorithms for fast least squares regression. In Advances in Neural Information Processing Systems 26, pages 360–368, 2013.
[6] Paramveer S. Dhillon, Dean Foster, and Lyle Ungar. Multi-view learning of word embeddings via CCA. In Advances in Neural Information Processing Systems (NIPS), volume 24, 2011.
[7] Paramveer S. Dhillon, Jordan Rodu, Dean P. Foster, and Lyle H. Ungar. Two step CCA: A new spectral method for estimating vector models of words. In Proceedings of the 29th International Conference on Machine Learning, ICML '12, 2012.
[8] Petros Drineas, Michael W. Mahoney, S. Muthukrishnan, and Tamás Sarlós. Faster least squares approximation. CoRR, abs/0710.1435, 2007.
[9] Dean P. Foster, Sham M. Kakade, and Tong Zhang. Multi-view dimensionality reduction via canonical correlation analysis. Technical report, 2008.
[10] Gene H. Golub and Charles F. Van Loan. Matrix Computations (3rd ed.). Johns Hopkins University Press, Baltimore, MD, USA, 1996.
[11] Gene H. Golub and Hongyuan Zha. The canonical correlations of matrix pairs and their numerical computation.
Technical report, Computer Science Department, Stanford University, 1992.
[12] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, May 2011.
[13] Nathan Halko, Per-Gunnar Martinsson, Yoel Shkolnisky, and Mark Tygert. An algorithm for the principal component analysis of large data sets. SIAM Journal on Scientific Computing, 33(5):2580–2594, 2011.
[14] David R. Hardoon, Sandor Szedmak, and John Shawe-Taylor. Canonical correlation analysis: An overview with application to learning methods. Technical report, 2007.
[15] H. Hotelling. Relations between two sets of variates. Biometrika, 28:312–377, 1936.
[16] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. Advances in Neural Information Processing Systems (NIPS), 2013.
[17] Sham M. Kakade and Dean P. Foster. Multi-view regression via canonical correlation analysis. In Proceedings of the Conference on Learning Theory, 2007.
[18] Michael Lamar, Yariv Maron, Mark Johnson, and Elie Bienenstock. SVD and clustering for unsupervised POS tagging. In Proceedings of the ACL 2010 Conference Short Papers, pages 215–219, Uppsala, Sweden, 2010. Association for Computational Linguistics.
[19] Justin Ma, Lawrence K. Saul, Stefan Savage, and Geoffrey M. Voelker. Identifying suspicious URLs: An application of large-scale online learning. In Proceedings of the International Conference on Machine Learning (ICML), 2009.
[20] Liang Sun, Shuiwang Ji, and Jieping Ye. A least squares formulation for canonical correlation analysis. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, pages 1024–1031, New York, NY, USA, 2008. ACM.
[21] Lloyd N. Trefethen and David Bau. Numerical Linear Algebra. SIAM, 1997.
Analysis of Variational Bayesian Latent Dirichlet Allocation: Weaker Sparsity than MAP Shinichi Nakajima Berlin Big Data Center, TU Berlin Berlin 10587 Germany nakajima@tu-berlin.de Issei Sato University of Tokyo Tokyo 113-0033 Japan sato@r.dl.itc.u-tokyo.ac.jp Masashi Sugiyama University of Tokyo Tokyo 113-0033, Japan sugi@k.u-tokyo.ac.jp Kazuho Watanabe Toyohashi University of Technology Aichi 441-8580 Japan wkazuho@cs.tut.ac.jp Hiroko Kobayashi Nikon Corporation Kanagawa 244-8533 Japan hiroko.kobayashi@nikon.com Abstract Latent Dirichlet allocation (LDA) is a popular generative model of various objects such as texts and images, where an object is expressed as a mixture of latent topics. In this paper, we theoretically investigate variational Bayesian (VB) learning in LDA. More specifically, we analytically derive the leading term of the VB free energy under an asymptotic setup, and show that there exist transition thresholds in Dirichlet hyperparameters around which the sparsity-inducing behavior drastically changes. Then we further theoretically reveal the notable phenomenon that VB tends to induce weaker sparsity than MAP in the LDA model, which is opposed to other models. We experimentally demonstrate the practical validity of our asymptotic theory on real-world Last.FM music data. 1 Introduction Latent Dirichlet allocation (LDA) [5] is a generative model successfully used in various applications such as text analysis [5], image analysis [15], genometrics [6, 4], human activity analysis [12], and collaborative filtering [14, 20]1. Given word occurrences of documents in a corpora, LDA expresses each document as a mixture of multinomial distributions, each of which is expected to capture a topic. The extracted topics provide bases in a low-dimensional feature space, in which each document is compactly represented. This topic expression was shown to be useful for solving various tasks including classification [15], retrieval [26], and recommendation [14]. 
Since rigorous Bayesian inference is computationally intractable in the LDA model, various approximation techniques such as variational Bayesian (VB) learning [3, 7] are used. Previous theoretical studies on VB learning revealed that VB tends to produce sparse solutions, e.g., in mixture models [24, 25, 13], hidden Markov models [11], Bayesian networks [23], and fully-observed matrix factorization [17]. Here, by sparsity we mean that VB exhibits the automatic relevance determination (ARD) effect [19], which automatically prunes irrelevant degrees of freedom under a non-informative or weakly sparse prior. Therefore, it is naturally expected that VB-LDA also produces a sparse solution (in terms of topics). However, it is often observed that VB-LDA does not generally give sparse solutions. In this paper, we attempt to clarify this gap by theoretically investigating the sparsity-inducing mechanism of VB-LDA. More specifically, we first analytically derive the leading term of the VB free energy in some asymptotic limits, and show that there exist transition thresholds in the Dirichlet hyperparameters around which the sparsity-inducing behavior changes drastically. We then analyze the behavior of MAP and its variants in a similar way, and show that the VB solution is less sparse than the MAP solution in the LDA model. This phenomenon is completely opposite to other models such as mixture models [24, 25, 13], hidden Markov models [11], Bayesian networks [23], and fully-observed matrix factorization [17], where VB tends to induce stronger sparsity than MAP. We numerically demonstrate the practical validity of our asymptotic theory using artificial and real-world Last.FM music data for collaborative filtering, and further discuss the peculiarity of the LDA model in terms of sparsity.

¹ For simplicity, we use the terminology of text analysis below. However, the range of application of the theory given in this paper is not limited to texts.
The free energy of VB-LDA was previously analyzed in [16], which evaluated the advantage of collapsed VB [21] over the original VB learning. However, that work focused on the difference between VB and collapsed VB, and neither the absolute free energy nor the sparsity was investigated. The update rules of VB were compared with those of MAP in [2]. However, that work is based on an approximation, and no rigorous analysis was made. To the best of our knowledge, our paper is the first work that theoretically elucidates the sparsity-inducing mechanism of VB-LDA.

2 Formulation

In this section, we introduce the latent Dirichlet allocation model and variational Bayesian learning.

2.1 Latent Dirichlet Allocation

Suppose that we observe M documents, each of which consists of N^(m) words. Each word is included in a vocabulary of size L. We assume that each word is associated with one of H topics, which is not observed. We express the word occurrence by an L-dimensional indicator vector w, where one of the entries is equal to one and the others are zero. Similarly, we express the topic occurrence as an H-dimensional indicator vector z. We define the following functions that give the item numbers chosen by w and z, respectively:

´l(w) = l if w_l = 1 and w_{l'} = 0 for l' ≠ l,    ´h(z) = h if z_h = 1 and z_{h'} = 0 for h' ≠ h.

In the latent Dirichlet allocation (LDA) model [5], the word occurrence w^(n,m) at the n-th position of the m-th document is assumed to follow the multinomial distribution

p(w^(n,m) | Θ, B) = ∏_{l=1}^{L} ((BΘ⊤)_{l,m})^{w_l^(n,m)} = (BΘ⊤)_{´l(w^(n,m)), m},    (1)

where Θ ∈ [0,1]^{M×H} and B ∈ [0,1]^{L×H} are parameter matrices to be estimated. The rows of Θ and the columns of B are probability mass vectors that sum to one. We denote a column vector of a matrix by a bold lowercase letter, and a row vector by a bold lowercase letter with a tilde, i.e.,

Θ = (θ_1, ..., θ_H) = (θ̃_1, ..., θ̃_M)⊤,    B = (β_1, ..., β_H) = (β̃_1, ..., β̃_L)⊤.
With this notation, θ̃_m denotes the topic distribution of the m-th document, and β_h denotes the word distribution of the h-th topic. Given the topic occurrence latent variable z^(n,m), the complete likelihood is written as

p(w^(n,m), z^(n,m) | Θ, B) = p(w^(n,m) | z^(n,m), B) p(z^(n,m) | Θ),    (2)

where p(w^(n,m) | z^(n,m), B) = ∏_{l=1}^{L} ∏_{h=1}^{H} (B_{l,h})^{w_l^(n,m) z_h^(n,m)} and p(z^(n,m) | Θ) = ∏_{h=1}^{H} (Θ_{m,h})^{z_h^(n,m)}. We assume Dirichlet priors on Θ and B:

p(Θ | α) ∝ ∏_{m=1}^{M} ∏_{h=1}^{H} (Θ_{m,h})^{α−1},    p(B | η) ∝ ∏_{h=1}^{H} ∏_{l=1}^{L} (B_{l,h})^{η−1},    (3)

where α and η are hyperparameters that control the prior sparsity. We could make α depend on m and/or h, and η depend on l and/or h, and they could be estimated from observations; however, we fix these hyperparameters as given constants for simplicity in our analysis below. Figure 1 shows the graphical model of LDA.

Figure 1: Graphical model of LDA.

2.2 Variational Bayesian Learning

The Bayes posterior of LDA is written as

p(Θ, B, {z^(n,m)} | {w^(n,m)}, α, η) = p({w^(n,m)}, {z^(n,m)} | Θ, B) p(Θ | α) p(B | η) / p({w^(n,m)}),    (4)

where p({w^(n,m)}) = ∫ p({w^(n,m)}, {z^(n,m)} | Θ, B) p(Θ | α) p(B | η) dΘ dB d{z^(n,m)} is intractable to compute and thus requires an approximation method. In this paper, we focus on the variational Bayesian (VB) approximation and investigate its behavior theoretically. In the VB approximation, we assume that the approximate posterior factorizes as

q(Θ, B, {z^(n,m)}) = q(Θ, B) q({z^(n,m)}),    (5)

and minimize the free energy

F = ⟨ log [ q(Θ, B, {z^(n,m)}) / ( p({w^(n,m)}, {z^(n,m)} | Θ, B) p(Θ | α) p(B | η) ) ] ⟩_{q(Θ, B, {z^(n,m)})},    (6)

where ⟨·⟩_p denotes the expectation over the distribution p. This amounts to finding the distribution closest to the Bayes posterior (4) under the constraint (5). Using the variational method, we obtain the following stationary conditions:

q(Θ) ∝ p(Θ | α) exp ⟨ log p({w^(n,m)}, {z^(n,m)} | Θ, B) ⟩_{q(B) q({z^(n,m)})},    (7)
q(B) ∝ p(B | η) exp ⟨ log p({w^(n,m)}, {z^(n,m)} | Θ, B) ⟩_{q(Θ) q({z^(n,m)})},    (8)
q({z^(n,m)}) ∝ exp ⟨ log p({w^(n,m)}, {z^(n,m)} | Θ, B) ⟩_{q(Θ) q(B)}.    (9)
From this, we can confirm that {q(θ̃_m)} and {q(β_h)} follow Dirichlet distributions and {q(z^(n,m))} follows a multinomial distribution:

q(Θ) ∝ ∏_{m=1}^{M} ∏_{h=1}^{H} (Θ_{m,h})^{Θ̆_{m,h} − 1},    q(B) ∝ ∏_{h=1}^{H} ∏_{l=1}^{L} (B_{l,h})^{B̆_{l,h} − 1},    (10)

q({z^(n,m)}) = ∏_{m=1}^{M} ∏_{n=1}^{N^(m)} ∏_{h=1}^{H} (z̆_h^(n,m))^{z_h^(n,m)},    (11)

where, for Ψ(·) denoting the digamma function, the variational parameters satisfy

Θ̆_{m,h} = α + Σ_{n=1}^{N^(m)} z̆_h^(n,m),    B̆_{l,h} = η + Σ_{m=1}^{M} Σ_{n=1}^{N^(m)} w_l^(n,m) z̆_h^(n,m),    (12)

z̆_h^(n,m) = exp( Ψ(Θ̆_{m,h}) + Σ_{l=1}^{L} w_l^(n,m) ( Ψ(B̆_{l,h}) − Ψ(Σ_{l'=1}^{L} B̆_{l',h}) ) ) / Σ_{h'=1}^{H} exp( Ψ(Θ̆_{m,h'}) + Σ_{l=1}^{L} w_l^(n,m) ( Ψ(B̆_{l,h'}) − Ψ(Σ_{l'=1}^{L} B̆_{l',h'}) ) ).    (13)

2.3 Partially Bayesian Learning and MAP Estimation

We can apply VB learning partially by approximating the posterior of Θ or B by the delta function. This approach is called partially Bayesian (PB) learning [18], whose behavior was analyzed and compared with VB in fully-observed matrix factorization. We call it PBA learning if Θ is marginalized and B is point-estimated, and PBB learning if B is marginalized and Θ is point-estimated. Note that the original VB algorithm for LDA proposed by [5] corresponds to PBA in our terminology. We also analyze the behavior of MAP estimation, where both Θ and B are point-estimated. This corresponds to the probabilistic latent semantic analysis (pLSA) model [10] if we assume the flat prior α = η = 1 [8].

3 Theoretical Analysis

In this section, we first give an explicit form of the free energy in the LDA model. We then investigate its asymptotic behavior for VB learning, and further conduct similar analyses for the PBA, PBB, and MAP methods. Finally, we discuss the sparsity-inducing mechanism of these learning methods and the relation to previous theoretical studies.
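The generative process of Section 2.1 can be sketched as follows (a toy numpy sketch under our own naming, not the authors' code): draw a topic mixture per document and a word distribution per topic from the Dirichlet priors (3), then draw each word through its latent topic as in (1)-(2).

```python
import numpy as np

def sample_lda_corpus(M, N, L, H, alpha, eta, seed=0):
    """Draw a toy corpus from the LDA generative model:
    Theta rows ~ Dirichlet(alpha), B columns ~ Dirichlet(eta),
    z^{(n,m)} ~ Mult(Theta_m), w^{(n,m)} ~ Mult(B_{:, z})."""
    rng = np.random.default_rng(seed)
    Theta = rng.dirichlet(alpha * np.ones(H), size=M)   # M x H topic mixtures
    B = rng.dirichlet(eta * np.ones(L), size=H).T       # L x H word distributions
    docs = np.empty((M, N), dtype=int)
    for m in range(M):
        z = rng.choice(H, size=N, p=Theta[m])           # latent topic per position
        for n in range(N):
            docs[m, n] = rng.choice(L, p=B[:, z[n]])    # observed word index
    return docs, Theta, B
```

Small α and η in this sketch concentrate the Dirichlet draws, which is the sparse-prior regime the analysis below is concerned with.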
3.1 Explicit Form of Free Energy

We first express the free energy (6) as a function of the variational parameters Θ̆ and B̆:

F = R + Q,    (14)

where

R = ⟨ log [ q(Θ) q(B) / ( p(Θ | α) p(B | η) ) ] ⟩_{q(Θ,B)}
  = Σ_{m=1}^{M} { log[ Γ(Σ_{h=1}^{H} Θ̆_{m,h}) / ∏_{h=1}^{H} Γ(Θ̆_{m,h}) · Γ(α)^H / Γ(Hα) ] + Σ_{h=1}^{H} (Θ̆_{m,h} − α)( Ψ(Θ̆_{m,h}) − Ψ(Σ_{h'=1}^{H} Θ̆_{m,h'}) ) }
  + Σ_{h=1}^{H} { log[ Γ(Σ_{l=1}^{L} B̆_{l,h}) / ∏_{l=1}^{L} Γ(B̆_{l,h}) · Γ(η)^L / Γ(Lη) ] + Σ_{l=1}^{L} (B̆_{l,h} − η)( Ψ(B̆_{l,h}) − Ψ(Σ_{l'=1}^{L} B̆_{l',h}) ) },    (15)

Q = ⟨ log [ q({z^(n,m)}) / p({w^(n,m)}, {z^(n,m)} | Θ, B) ] ⟩_{q(Θ,B,{z^(n,m)})}
  = − Σ_{m=1}^{M} N^(m) Σ_{l=1}^{L} V_{l,m} log Σ_{h=1}^{H} [ exp(Ψ(Θ̆_{m,h})) / exp(Ψ(Σ_{h'=1}^{H} Θ̆_{m,h'})) · exp(Ψ(B̆_{l,h})) / exp(Ψ(Σ_{l'=1}^{L} B̆_{l',h})) ].    (16)

Here, V ∈ ℝ^{L×M} is the empirical word distribution matrix with entries V_{l,m} = (1/N^(m)) Σ_{n=1}^{N^(m)} w_l^(n,m). Note that we have eliminated the variational parameters {z̆^(n,m)} for the topic occurrence latent variables by using the stationary condition (13).

3.2 Asymptotic Analysis of the VB Solution

Below, we investigate the leading term of the free energy in the asymptotic limit where N ≡ min_m N^(m) → ∞. Unlike the previous analysis for latent variable models [24], we do not assume L, M ≪ N but only 1 ≪ L, M, N at this point. This amounts to considering the asymptotic limit where L, M, N → ∞ with a fixed mutual ratio, or equivalently, assuming L, M ∼ O(N). Throughout the paper, H is set to H = min(L, M) (i.e., the matrix BΘ⊤ can express any multinomial distribution). We assume that the word distribution matrix V is a sample from the multinomial distribution with true parameter U* ∈ ℝ^{L×M} of rank H* ∼ O(1), i.e., U* = B*Θ*⊤, where Θ* ∈ ℝ^{M×H*} and B* ∈ ℝ^{L×H*}.² We assume that α, η ∼ O(1). The stationary condition (12) leads to the following lemma (the proof is given in Appendix A):

Lemma 1. Let B̂Θ̂⊤ = ⟨BΘ⊤⟩_{q(Θ,B)}. Then it holds that

⟨ (BΘ⊤ − B̂Θ̂⊤)²_{l,m} ⟩_{q(Θ,B)} = O_p(N^{−2}),    (17)
Q = − Σ_{m=1}^{M} N^(m) Σ_{l=1}^{L} V_{l,m} log(B̂Θ̂⊤)_{l,m} + O_p(M),    (18)

where O_p(·) denotes order in probability.

² More precisely, U* = B*Θ*⊤ + O(N^{−1}) is sufficient.
Eq. (17) implies the convergence of the posterior. Let

Ĵ = Σ_{l=1}^{L} Σ_{m=1}^{M} κ( (B̂Θ̂⊤)_{l,m} ≠ (B*Θ*⊤)_{l,m} + O_p(N^{−1}) )    (19)

be the number of entries of B̂Θ̂⊤ that do not converge to the true value. Here, κ(·) denotes the indicator function, equal to one if the event is true and zero otherwise. Then Eq. (18) leads to the following lemma:

Lemma 2. Q is minimized when B̂Θ̂⊤ = B*Θ*⊤ + O_p(N^{−1}), and it holds that Q = S + O_p(ĴN + M), where

S = − log p({w^(n,m)}, {z^(n,m)} | Θ*, B*) = − Σ_{m=1}^{M} N^(m) Σ_{l=1}^{L} V_{l,m} log(B*Θ*⊤)_{l,m}.

Lemma 2 simply states that Q/N converges to the normalized entropy S/N of the true distribution (the lowest achievable value with probability 1) if and only if VB converges to the true distribution (i.e., Ĵ = 0). Let Ĥ = Σ_{h=1}^{H} κ( (1/M) Σ_{m=1}^{M} Θ̂_{m,h} ∼ O_p(1) ) be the number of topics used in the whole corpus, M̂^(h) = Σ_{m=1}^{M} κ( Θ̂_{m,h} ∼ O_p(1) ) the number of documents that contain the h-th topic, and L̂^(h) = Σ_{l=1}^{L} κ( B̂_{l,h} ∼ O_p(1) ) the number of words of which the h-th topic consists. We have the following lemma (the proof is given in Appendix B):

Lemma 3. R is written as follows:

R = [ M(Hα − 1/2) + Ĥ(Lη − 1/2) − Σ_{h=1}^{H} ( M̂^(h)(α − 1/2) + L̂^(h)(η − 1/2) ) ] log N + (H − Ĥ)(Lη − 1/2) log L + O_p(H(M + L)).    (20)

Since we assumed that the true matrices Θ* and B* are of rank H*, Ĥ = H* ∼ O(1) is sufficient for the VB posterior to converge to the true distribution. However, Ĥ can be much larger than H* with ⟨BΘ⊤⟩_{q(Θ,B)} unchanged, because of the non-identifiability of matrix factorization: duplicating topics with divided weights, for example, does not change the distribution. Based on Lemmas 2 and 3, we obtain the following theorem (the proof is given in Appendix C):

Theorem 1. In the limit where N → ∞ with L, M ∼ O(1), it holds that Ĵ = 0 with probability 1, and

F = S + [ M(Hα − 1/2) + Ĥ(Lη − 1/2) − Σ_{h=1}^{H} ( M̂^(h)(α − 1/2) + L̂^(h)(η − 1/2) ) ] log N + O_p(1).
In the limit where N, M → ∞ with M/N, L ∼ O(1), it holds that Ĵ = o_p(log N), and

F = S + [ M(Hα − 1/2) − Σ_{h=1}^{H} M̂^(h)(α − 1/2) ] log N + o_p(N log N).

In the limit where N, L → ∞ with L/N, M ∼ O(1), it holds that Ĵ = o_p(log N), and

F = S + HLη log N + o_p(N log N).

In the limit where N, L, M → ∞ with L/N, M/N ∼ O(1), it holds that Ĵ = o_p(N log N), and

F = S + H(Mα + Lη) log N + o_p(N² log N).

Since Eq. (17) was shown to hold, the predictive distribution converges to the true distribution if Ĵ = 0. Accordingly, Theorem 1 states that consistency holds in the limit where N → ∞ with L, M ∼ O(1). Theorem 1 also implies that, in the asymptotic limits with small L ∼ O(1), the leading term depends on Ĥ, meaning that it dominates the topic sparsity of the VB solution. We have the following corollary (the proof is given in Appendix D):

Table 1: Sparsity thresholds of the VB, PBA, PBB, and MAP methods (see Theorem 2). For each range of η, the entry gives the threshold pair (α_sparse, α_dense) in the limit N → ∞ with L, M ∼ O(1); a single value is shown when α_sparse = α_dense, and "—" follows the original layout where no separate threshold is listed. The last line of each block gives the threshold α_{M→∞} in the limit N, M → ∞ with M/N, L ∼ O(1) (valid for all 0 < η < ∞).

VB:
  0 < η ≤ 1/(2L):     1/2 − (1/2 − Lη)/min_h M*^(h)
  1/(2L) < η ≤ 1/2:   1/2 + (Lη − 1/2)/max_h M*^(h)
  1/2 < η < ∞:        ( 1/2 + (L − 1)/(2 max_h M*^(h)),  1/2 + (Lη − 1/2)/min_h M*^(h) )
  α_{M→∞} = 1/2
PBA:
  0 < η < 1:          —
  1 ≤ η < ∞:          ( 1/2,  1/2 + L(η − 1)/min_h M*^(h) )
  α_{M→∞} = 1/2
PBB:
  0 < η ≤ 1/(2L):     1
  1/(2L) < η ≤ 1/2:   1 + (Lη − 1/2)/max_h M*^(h)
  1/2 < η < ∞:        ( 1 + (L − 1)/(2 max_h M*^(h)),  1 + (Lη − 1/2)/min_h M*^(h) )
  α_{M→∞} = 1
MAP:
  0 < η < 1:          —
  1 ≤ η < ∞:          ( 1,  1 + L(η − 1)/min_h M*^(h) )
  α_{M→∞} = 1

Corollary 1. Let M*^(h) = Σ_{m=1}^{M} κ(Θ*_{m,h} ∼ O(1)) and L*^(h) = Σ_{l=1}^{L} κ(B*_{l,h} ∼ O(1)). Consider the limit where N → ∞ with L, M ∼ O(1). When 0 < η ≤ 1/(2L), the VB solution is sparse if α < 1/2 − (1/2 − Lη)/min_h M*^(h), and dense if α > 1/2 − (1/2 − Lη)/min_h M*^(h). When 1/(2L) < η ≤ 1/2, the VB solution is sparse if α < 1/2 + (Lη − 1/2)/max_h M*^(h), and dense if α > 1/2 + (Lη − 1/2)/max_h M*^(h).
When η > 1/2, the VB solution is sparse if α < 1/2 + (L − 1)/(2 max_h M*^(h)), and dense if α > 1/2 + (Lη − 1/2)/min_h M*^(h). In the limit where N, M → ∞ with M/N, L ∼ O(1), the VB solution is sparse if α < 1/2 and dense if α > 1/2.

In the case L, M ≪ N and in the case L ≪ M, N, Corollary 1 provides information on the sparsity of the VB solution, which will be compared with the other methods in Section 3.3. On the other hand, although we have also successfully derived the leading term of the free energy in the case M ≪ L, N and in the case 1 ≪ L, M, N, it unfortunately provides no information on the sparsity of the solution.

3.3 Asymptotic Analysis of PBA, PBB, and MAP

By applying a similar analysis to PBA learning, PBB learning, and MAP estimation, we obtain the following theorem (the proof is given in Appendix E):

Theorem 2. In the limit where N → ∞ with L, M ∼ O(1), the solution is sparse if α < α_sparse and dense if α > α_dense. In the limit where N, M → ∞ with M/N, L ∼ O(1), the solution is sparse if α < α_{M→∞} and dense if α > α_{M→∞}. Here, α_sparse, α_dense, and α_{M→∞} are given in Table 1.

A notable finding from Table 1 is that the threshold determining the topic sparsity of PBB-LDA is (in most cases exactly) 1/2 larger than the threshold of VB-LDA. The same relation holds between MAP-LDA and PBA-LDA. From these, we conclude that point-estimating Θ, instead of integrating it out, increases the threshold by 1/2 in the LDA model. We validate this observation by numerical experiments in Section 4.

3.4 Discussion

The above theoretical analysis (Theorem 2) showed that VB tends to induce weaker sparsity than MAP in the LDA model³, i.e., VB requires a sparser prior (smaller α) than MAP to give a sparse solution (mean of the posterior).
This phenomenon is completely opposite to other models such as mixture models [24, 25, 13], hidden Markov models [11], Bayesian networks [23], and fully-observed matrix factorization [17], where VB tends to induce stronger sparsity than MAP. This phenomenon might be partly explained as follows: in the case of mixture models, the sparsity threshold depends on the degrees of freedom of a single component [24]. This is reasonable because adding a single component increases the model complexity by this amount. Also, in the case of LDA, adding a single topic requires L + 1 additional parameters. However, the added topic is shared over M documents, which could discount the increased model complexity relative to the increased data fidelity.

3 Although this tendency was previously pointed out [2] by using the approximation exp(ψ(n)) ≈ n − 1/2 and comparing the stationary conditions, our result has first clarified the sparsity behavior of the solution based on the asymptotic free energy analysis, without using such an approximation.

[Figure 2: Estimated number Ĥ of topics by (a) VB, (b) PBA, (c) PBB, and (d) MAP, for the artificial data with L = 100, M = 100, H* = 20, and N ∼ 10000.]

[Figure 3: Estimated number Ĥ of topics for the Last.FM data with L = 100, M = 100, and N ∼ 700.]
Corollary 1, which implies the dependency of the threshold for α on L and M, might support this conjecture. However, the same applies to matrix factorization, where VB was shown to give a sparser solution than MAP [17]. Investigation of related models, e.g., Poisson MF [9], would help us fully explain this phenomenon. Technically, our theoretical analysis is based on the previous asymptotic studies on VB learning conducted for latent variable models [24, 25, 13, 11, 23]. However, our analysis is not just a straightforward extension of those works to the LDA model. For example, the previous analyses either implicitly [24] or explicitly [13] assumed the consistency of VB learning, while we also analyzed the consistency of VB-LDA and showed that the consistency does not always hold (see Theorem 1). Moreover, we derived a general form of the asymptotic free energy, which can be applied to different asymptotic limits. Specifically, the standard asymptotic theory requires a large number N of words per document, compared with the number M of documents and the vocabulary size L. This may be reasonable for some collaborative filtering data, such as the Last.FM data used in our experiments in Section 4. However, L and/or M would be comparable to or larger than N in standard text analysis. Our general form of the asymptotic free energy also allowed us to elucidate the behavior of the VB free energy when L and/or M diverges with the same order as N. This attempt successfully revealed the sparsity of the solution for the case when M diverges while L ∼ O(1). However, when L diverges, we found that the leading term of the free energy does not contain interesting insight into the sparsity of the solution. Higher-order asymptotic analysis will be necessary to further understand the sparsity-inducing mechanism of the LDA model with a large vocabulary.

4 Numerical Illustration

In this section, we conduct numerical experiments on artificial and real data for collaborative filtering.
The artificial data were created as follows. We first sample the true document matrix Θ* of size M × H* and the true topic matrix B* of size L × H*. We assume that each row θ*_m of Θ* follows the Dirichlet distribution with α* = 1/H*, while each column β*_h of B* follows the Dirichlet distribution with η* = 1/L. The document length N^(m) is sampled from the Poisson distribution with its mean N. The word histogram N^(m) v_m for each document is sampled from the multinomial distribution with the parameter specified by the m-th column of B*Θ*⊤. Thus, we obtain the L × M matrix V, which corresponds to the empirical word distribution over the M documents.

[Figure 4: Estimated number Ĥ of topics by VB-LDA for the artificial data with H* = 20 and N ∼ 10000, for (a) L = 100, M = 100; (b) L = 100, M = 1000; (c) L = 500, M = 100; (d) L = 500, M = 1000. For the case when L = 500, M = 1000, the maximum estimated rank is limited to 100 for computational reasons.]

As a real-world dataset, we used the Last.FM dataset.4 Last.FM is a well-known social music web site, and the dataset includes the triple ("user," "artist," "Freq"), which was collected from the playlists of users in the community by using a plug-in in users' media players. This triple means that "user" played "artist" music "Freq" times, which indicates users' preferred artists. A user and a played artist are analogous to a document and a word, respectively. We randomly chose L artists from the top 1000 most frequent artists, and M users who live in the United States.
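The generative procedure above can be sketched in a few lines of NumPy (a minimal sketch under our own naming, not the authors' code):

```python
import numpy as np

def generate_lda_data(L, M, H_star, N_mean, seed=0):
    rng = np.random.default_rng(seed)
    # Rows of Theta* ~ Dir(alpha*) with alpha* = 1/H*; columns of B* ~ Dir(eta*) with eta* = 1/L.
    theta = rng.dirichlet(np.full(H_star, 1.0 / H_star), size=M)   # M x H*
    beta = rng.dirichlet(np.full(L, 1.0 / L), size=H_star).T       # L x H*
    word_dist = beta @ theta.T        # L x M; column m is the word distribution of document m
    V = np.zeros((L, M), dtype=int)
    for m in range(M):
        N_m = rng.poisson(N_mean)     # document length ~ Poisson(N)
        V[:, m] = rng.multinomial(N_m, word_dist[:, m])
    return V
```

Each column of the returned V is one document's word histogram, so V plays the role of the empirical word distribution over the M documents described above.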
To find a better local solution (which hopefully is close to the global solution), we adopted a split-and-merge strategy [22], and chose the local solution giving the lowest free energy among different initialization schemes. Figure 2 shows the estimated number Ĥ of topics by the different approximation methods, i.e., VB, PBA, PBB, and MAP, for the artificial data with L = 100, M = 100, H* = 20, and N ∼ 10000. We can clearly see that the sparsity threshold in PBB and MAP, where Θ is point-estimated, is larger than that in VB and PBA, where Θ is marginalized. This result supports the statement of Theorem 2. Figure 3 shows results on the Last.FM data with L = 100, M = 100, and N ∼ 700. We see a tendency similar to Figure 2, except in the region where η < 1 for PBA, in which our theory does not predict the estimated number of topics. Finally, we investigate how different asymptotic settings affect the topic sparsity. Figure 4 shows the sparsity dependence on L and M for the artificial data. The graphs correspond to the four cases mentioned in Theorem 1, i.e., (a) L, M ≪ N; (b) L ≪ N, M; (c) M ≪ N, L; and (d) 1 ≪ N, L, M. Corollary 1 explains the behavior in (a) and (b), and further analysis is required to explain the behavior in (c) and (d).

5 Conclusion

In this paper, we considered variational Bayesian (VB) learning in the latent Dirichlet allocation (LDA) model and analytically derived the leading term of the asymptotic free energy. When the vocabulary size is small, our result theoretically explains the phase-transition phenomenon. On the other hand, when the vocabulary size is as large as the number of words per document, the leading term tells nothing about sparsity. We need a more accurate analysis to clarify the sparsity in such cases. Throughout the paper, we assumed that the hyperparameters α and η are pre-fixed. However, α would often be estimated for each topic h, which is one of the advantages of using the LDA model in practice [5].
In future work, we will extend the current line of analysis to the empirical Bayesian setting where the hyperparameters are also learned, and further elucidate the behavior of the LDA model.

Acknowledgments

The authors thank the reviewers for helpful comments. Shinichi Nakajima thanks the support from Nikon Corporation, MEXT Kakenhi 23120004, and the Berlin Big Data Center project (FKZ 01IS14013A). Masashi Sugiyama thanks the support from the JST CREST program. Kazuho Watanabe thanks the support from JSPS Kakenhi 23700175 and 25120014.

4 http://mtg.upf.edu/node/1671

References

[1] H. Alzer. On some inequalities for the Gamma and Psi functions. Mathematics of Computation, 66(217):373–389, 1997.
[2] A. Asuncion, M. Welling, P. Smyth, and Y. W. Teh. On smoothing and inference for topic models. In Proc. of UAI, pages 27–34, 2009.
[3] H. Attias. Inferring parameters and structure of latent variable models by variational Bayes. In Proc. of UAI, pages 21–30, 1999.
[4] M. Bicego, P. Lovato, A. Ferrarini, and M. Delledonne. Biclustering of expression microarray data with topic models. In Proc. of ICPR, pages 2728–2731, 2010.
[5] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[6] X. Chen, X. Hu, X. Shen, and G. Rosen. Probabilistic topic modeling for genomic data interpretation. In 2010 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 149–152, 2010.
[7] Z. Ghahramani and M. J. Beal. Graphical models and variational methods. In Advanced Mean Field Methods, pages 161–177. MIT Press, 2001.
[8] M. Girolami and A. Kaban. On an equivalence between PLSI and LDA. In Proc. of SIGIR, pages 433–434, 2003.
[9] P. Gopalan, J. M. Hofman, and D. M. Blei. Scalable recommendation with Poisson factorization. arXiv:1311.1704 [cs.IR], 2013.
[10] T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42:177–196, 2001.
[11] T. Hosino, K. Watanabe, and S. Watanabe. Stochastic complexity of hidden Markov models on the variational Bayesian learning. IEICE Trans. on Information and Systems, J89-D(6):1279–1287, 2006.
[12] T. Huynh, M. Fritz, and B. Schiele. Discovery of activity patterns using topic models. In International Conference on Ubiquitous Computing (UbiComp), 2008.
[13] D. Kaji, K. Watanabe, and S. Watanabe. Phase transition of variational Bayes learning in Bernoulli mixture. Australian Journal of Intelligent Information Processing Systems, 11(4):35–40, 2010.
[14] R. Krestel, P. Fankhauser, and W. Nejdl. Latent Dirichlet allocation for tag recommendation. In Proceedings of the Third ACM Conference on Recommender Systems, pages 61–68, 2009.
[15] F.-F. Li and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In Proc. of CVPR, pages 524–531, 2005.
[16] I. Mukherjee and D. M. Blei. Relative performance guarantees for approximate inference in latent Dirichlet allocation. In Advances in NIPS, 2008.
[17] S. Nakajima and M. Sugiyama. Theoretical analysis of Bayesian matrix factorization. Journal of Machine Learning Research, 12:2579–2644, 2011.
[18] S. Nakajima, M. Sugiyama, and S. D. Babacan. On Bayesian PCA: Automatic dimensionality selection and analytic solution. In Proc. of ICML, pages 497–504, 2011.
[19] R. M. Neal. Bayesian Learning for Neural Networks. Springer, 1996.
[20] S. Purushotham, Y. Liu, and C. C. J. Kuo. Collaborative topic regression with social matrix factorization for recommendation systems. In Proc. of ICML, 2012.
[21] Y. W. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In Advances in NIPS, 2007.
[22] N. Ueda, R. Nakano, Z. Ghahramani, and G. E. Hinton. SMEM algorithm for mixture models. Neural Computation, 12(9):2109–2128, 2000.
[23] K. Watanabe, M. Shiga, and S. Watanabe. Upper bound for variational free energy of Bayesian networks. Machine Learning, 75(2):199–215, 2009.
[24] K. Watanabe and S. Watanabe. Stochastic complexities of Gaussian mixtures in variational Bayesian approximation. Journal of Machine Learning Research, 7:625–644, 2006.
[25] K. Watanabe and S. Watanabe. Stochastic complexities of general mixture models in variational Bayesian learning. Neural Networks, 20(2):210–219, 2007.
[26] X. Wei and W. B. Croft. LDA-based document models for ad-hoc retrieval. In Proc. of SIGIR, pages 178–185, 2006.
Iterative Neural Autoregressive Distribution Estimator (NADE-k)

Tapani Raiko, Aalto University
Li Yao, Université de Montréal
KyungHyun Cho, Université de Montréal
Yoshua Bengio, Université de Montréal, CIFAR Senior Fellow

Abstract

Training of the neural autoregressive density estimator (NADE) can be viewed as doing one step of probabilistic inference on missing values in data. We propose a new model that extends this inference scheme to multiple steps, arguing that it is easier to learn to improve a reconstruction in k steps rather than to learn to reconstruct in a single inference step. The proposed model is an unsupervised building block for deep learning that combines the desirable properties of NADE and multi-prediction training: (1) its test likelihood can be computed analytically, (2) it is easy to generate independent samples from it, and (3) it uses an inference engine that is a superset of variational inference for Boltzmann machines. The proposed NADE-k is competitive with the state-of-the-art in density estimation on the two datasets tested.

1 Introduction

Traditional building blocks for deep learning have some unsatisfactory properties. Boltzmann machines are, for instance, difficult to train due to the intractability of computing the statistics of the model distribution, which leads to potentially high-variance MCMC estimators during training (if there are many well-separated modes (Bengio et al., 2013)) and a computationally intractable objective function. Autoencoders have a simpler objective function (e.g., denoising reconstruction error (Vincent et al., 2010)), which can be used for model selection, but not for the important choice of the corruption function.
On the other hand, this paper follows up on the Neural Autoregressive Distribution Estimator (NADE, Larochelle and Murray, 2011), which specializes earlier neural autoregressive density estimators (Bengio and Bengio, 2000) and was recently extended (Uria et al., 2014) to deeper architectures. It is appealing because both the training criterion (just log-likelihood) and its gradient can be computed tractably and used for model selection, and the model can be trained by stochastic gradient descent with backpropagation. However, it has been observed that the performance of NADE still has room for improvement. The idea of using missing-value imputation as a training criterion has appeared in three recent papers. This approach can be seen either as training an energy-based model to impute missing values well (Brakel et al., 2013), as training a generative probabilistic model to maximize a generalized pseudo-log-likelihood (Goodfellow et al., 2013), or as training a denoising autoencoder with a masking corruption function (Uria et al., 2014). Recent work on generative stochastic networks (GSNs), which include denoising auto-encoders as special cases, justifies dependency networks (Heckerman et al., 2000) as well as generalized pseudo-log-likelihood (Goodfellow et al., 2013), but has the disadvantage that sampling from the trained "stochastic fill-in" model requires a Markov chain (repeatedly resampling some subset of the values given the others). In all these cases, learning progresses by back-propagating the imputation (reconstruction) error through inference steps of the model. This allows the model to better cope with a potentially imperfect inference algorithm. This learning-to-cope approach was introduced by Stoyanov et al. (2011) and Domke (2011).
[Figure 1: The choice of a structure for NADE-k is very flexible. The dark filled halves indicate that a part of the input is observed and fixed to the observed values during the iterations. Left: basic structure corresponding to Equations (6–7) with n = 2 and k = 2. Middle: depth added as in NADE by Uria et al. (2014) with n = 3 and k = 2. Right: depth added as in the Multi-Prediction Deep Boltzmann Machine by Goodfellow et al. (2013) with n = 2 and k = 3. The first two structures are used in the experiments.]

The NADE model involves an ordering over the components of the data vector. The core of the model is the reconstruction of the next component given all the previous ones. In this paper we reinterpret the reconstruction procedure as a single iteration in a variational inference algorithm, and we propose a version that uses k iterations instead, inspired by (Goodfellow et al., 2013; Brakel et al., 2013). We evaluate the proposed model on two datasets and show that it outperforms the original NADE (Larochelle and Murray, 2011) as well as NADE trained with the order-agnostic training algorithm (Uria et al., 2014).

2 Proposed Method: NADE-k

We propose a probabilistic model called NADE-k for D-dimensional binary data vectors x. We start by defining p_θ for imputing missing values using a fully factorial conditional distribution:

p_θ(x_mis | x_obs) = ∏_{i∈mis} p_θ(x_i | x_obs),   (1)

where the subscripts mis and obs denote the missing and observed components of x. From the conditional distribution p_θ we compute the joint probability distribution over x given an ordering o (a permutation of the integers from 1 to D) by

p_θ(x | o) = ∏_{d=1}^D p_θ(x_{o_d} | x_{o<d}),   (2)

where o<d stands for the indices o_1, ..., o_{d−1}.
The model is trained to minimize the negative log-likelihood averaged over all possible orderings o,

L(θ) = E_{o∈D!} [ E_{x∈data} [ −log p_θ(x | o) ] ],   (3)

using an unbiased, stochastic estimator of L(θ),

L̂(θ) = − (D / (D − d + 1)) log p_θ(x_{o≥d} | x_{o<d}),   (4)

obtained by drawing o uniformly from all D! possible orderings and d uniformly from 1 ... D (Uria et al., 2014). Note that while the model definition in Eq. (2) is sequential in nature, the training criterion (4) involves reconstruction of all the missing values in parallel. In this way, training does not involve picking or following specific orderings of indices. In this paper, we define the conditional model p_θ(x_mis | x_obs) using a deep feedforward neural network with nk layers, where we use n weight matrices k times. This can also be interpreted as running k successive inference steps with an n-layer neural network. The input to the network is

v⟨0⟩ = m ⊙ E_{x∈data}[x] + (1 − m) ⊙ x,   (5)

where m is a binary mask vector indicating missing components with 1, and ⊙ is an element-wise multiplication. E_{x∈data}[x] is the empirical mean of the observations. For simplicity, we give equations for a simple structure with n = 2. See Fig. 1 (left) for an illustration of this structure. In this case, the activations of the layers at the t-th step are

h⟨t⟩ = φ(W v⟨t−1⟩ + c)   (6)
v⟨t⟩ = m ⊙ σ(V h⟨t⟩ + b) + (1 − m) ⊙ x,   (7)

where φ is an element-wise nonlinearity, σ is the logistic sigmoid function, and the iteration index t runs from 1 to k. The conditional probabilities of the variables (see Eq. (1)) are read from the output v⟨k⟩ as

p_θ(x_i = 1 | x_obs) = v⟨k⟩_i.   (8)

[Figure 2: The inner working mechanism of NADE-k. The leftmost column shows the data vectors x, the second column shows their masked version, and the subsequent columns show the reconstructions v⟨0⟩ ... v⟨10⟩ (see Eq. (7)).]

Fig. 2 shows examples of how v⟨t⟩ evolves over iterations with the trained model. The parameters θ = {W, V, c, b} can be learned by stochastic gradient descent to minimize L(θ) in Eq.
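As a concrete illustration, the k-step inference of Eqs. (5)–(8) can be sketched in a few lines of NumPy (a minimal sketch; the function name and argument layout are ours, and W, V, b, c stand for already-trained parameters):

```python
import numpy as np

def nade_k_predict(x, m, W, V, b, c, x_mean, k):
    """Return v<k>, whose missing entries hold p(x_i = 1 | x_obs)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    v = m * x_mean + (1.0 - m) * x                  # Eq. (5): missing dims (m=1) start at the data mean
    for _ in range(k):
        h = np.tanh(W @ v + c)                      # Eq. (6), with phi = tanh as in the experiments
        v = m * sigmoid(V @ h + b) + (1.0 - m) * x  # Eq. (7): observed dims stay clamped to x
    return v                                        # Eq. (8)
```

With φ = σ and V = W⊤ this loop reduces to the RBM mean-field updates of Eqs. (15)–(16) discussed below.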
(3), or its stochastic approximation L̂(θ) in Eq. (4), with the stochastic gradient computed by back-propagation. Once the parameters θ are learned, we can define a mixture model by using a uniform probability over a set of orderings O. We can compute the probability of a given vector x as a mixture model

p_mixt(x | θ, O) = (1/|O|) Σ_{o∈O} p_θ(x | o)   (9)

with Eq. (2). We can draw independent samples from the mixture by first drawing an ordering o and then sequentially drawing each variable using x_{o_d} ∼ p_θ(x_{o_d} | x_{o<d}). Furthermore, we can draw samples from the conditional p(x_mis | x_obs) easily by considering only orderings where the observed indices appear before the missing ones.

Pretraining. It is well known that training deep networks is difficult without pretraining, and in our experiments we train networks of up to kn = 7 × 3 = 21 layers. When pretraining, we train the model to produce good reconstructions v⟨t⟩ at each step t = 1 ... k. More formally, in the pretraining phase, we replace Equations (4) and (8) by

L̂_pre(θ) = − (D / (D − d + 1)) (1/k) Σ_{t=1}^k log ∏_{i∈o≥d} p⟨t⟩_θ(x_i | x_{o<d})   (10)
p⟨t⟩_θ(x_i = 1 | x_obs) = v⟨t⟩_i.   (11)

2.1 Related Methods and Approaches

Order-agnostic NADE. The proposed method closely follows the order-agnostic version of NADE (Uria et al., 2014), which may be considered as the special case of NADE-k with k = 1. On the other hand, NADE-k can be seen as a deep NADE with some specific weight sharing (the matrices W and V are reused at different depths) and gating in the activations of some layers (see Equation (7)). Additionally, Uria et al. (2014) found it crucial to give the mask m as an auxiliary input to the network, and initialized missing values to zero instead of the empirical mean (see Eq. (5)). Due to these differences, we call their approach NADE-mask. One should note that NADE-mask has more parameters due to using the mask as a separate input to the network, whereas NADE-k is roughly k times more expensive to compute.
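Ancestral sampling from p_θ(x | o) as described above can be sketched as follows (Python; `predict` is a hypothetical callable implementing the k-step inference of Eqs. (5)–(8), mapping a data vector and a missingness mask to per-dimension probabilities):

```python
import numpy as np

def sample_given_ordering(order, predict, rng=None):
    """Draw x_{o_d} ~ p(x_{o_d} | x_{o<d}) for d = 1..D, one dimension at a time."""
    rng = rng or np.random.default_rng()
    D = len(order)
    x = np.zeros(D)
    m = np.ones(D)                 # 1 marks a still-missing dimension
    for d in order:
        p = predict(x, m)[d]       # p(x_d = 1 | dimensions revealed so far)
        x[d] = float(rng.random() < p)
        m[d] = 0.0                 # dimension d is now observed
    return x
```

Sampling from p(x_mis | x_obs) is the same loop started with m zeroed on the observed dimensions, i.e., with an ordering that visits the missing indices only.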
Probabilistic Inference. Let us consider the task of missing-value imputation in a probabilistic latent variable model. We get the conditional probability of interest by marginalizing out the latent variables from the posterior distribution:

p(x_mis | x_obs) = ∫_h p(h, x_mis | x_obs) dh.   (12)

Accessing the joint distribution p(h, x_mis | x_obs) directly is often harder than alternately updating h and x_mis based on the conditional distributions p(h | x_mis, x_obs) and p(x_mis | h).1 Variational inference is one of the representative examples that exploit this. In variational inference, a factorial distribution q(h, x_mis) = q(h) q(x_mis) is iteratively fitted to p(h, x_mis | x_obs) such that the KL divergence between q and p,

KL[q(h, x_mis) || p(h, x_mis | x_obs)] = − ∫_{h, x_mis} q(h, x_mis) log [ p(h, x_mis | x_obs) / q(h, x_mis) ] dh dx_mis,   (13)

is minimized. The algorithm alternates between updating q(h) and q(x_mis), while considering the other one fixed. As an example, let us consider a restricted Boltzmann machine (RBM) defined by

p(v, h) ∝ exp(b⊤v + c⊤h + h⊤Wv).   (14)

We can fit an approximate posterior distribution parameterized as q(v_i = 1) = v̄_i and q(h_j = 1) = h̄_j to the true posterior distribution by iteratively computing

h̄ ← σ(W v̄ + c)   (15)
v̄ ← m ⊙ σ(W⊤h̄ + b) + (1 − m) ⊙ v.   (16)

We notice the similarity to Eqs. (6)–(7): if we assume φ = σ and V = W⊤, the inference in NADE-k is equivalent to performing k iterations of variational inference on an RBM for the missing values (Peterson and Anderson, 1987). We can also get variational inference on a deep Boltzmann machine (DBM) using the structure in Fig. 1 (right).

Multi-Prediction Deep Boltzmann Machine. Goodfellow et al. (2013) and Brakel et al. (2013) use backpropagation through variational inference steps to train a deep Boltzmann machine. This is very similar to our work, except that they approach the problem from the view of maximizing the generalized pseudo-likelihood (Huang and Ogata, 2002).
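The mean-field updates (15)–(16) can be sketched directly (a minimal NumPy sketch under our own naming; m again marks missing dimensions with 1, and the initialization of missing values at 0.5 is our choice):

```python
import numpy as np

def mean_field_impute(v_obs, m, W, b, c, k):
    """k mean-field sweeps (Eqs. (15)-(16)) over the missing values of an RBM."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    v = m * 0.5 + (1.0 - m) * v_obs                      # initialize missing dims at 0.5
    for _ in range(k):
        h = sigmoid(W @ v + c)                           # Eq. (15): q(h_j = 1)
        v = m * sigmoid(W.T @ h + b) + (1.0 - m) * v_obs # Eq. (16): clamp observed dims
    return v
```

Comparing this with the NADE-k step makes the correspondence concrete: the RBM case is exactly the NADE-k loop with φ = σ and the decoding matrix tied to W⊤.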
Also, the deep Boltzmann machine lacks a tractable probabilistic interpretation similar to that of NADE-k (see Eq. (2)) that would allow one to compute a probability or to generate independent samples without resorting to a Markov chain. Also, our approach is somewhat more flexible in the choice of model structures, as can be seen in Fig. 1. For instance, in the proposed NADE-k, the encoding and decoding weights do not have to be shared, and any type of nonlinear activation, other than a logistic sigmoid function, can be used.

Product and Mixture of Experts. One could ask what would happen if we defined an ensemble likelihood along the lines of the training criterion in Eq. (3). That is,

−log p_prod(x | θ) ∝ E_{o∈D!} [ −log p(x | θ, o) ].   (17)

Maximizing this ensemble likelihood directly would correspond to training a product-of-experts model (Hinton, 2000). However, this requires us to evaluate the intractable normalization constant during training as well as at inference time, making the model intractable. On the other hand, we may consider using the log-probability of a sample under the mixture-of-experts model as the training criterion,

−log p_mixt(x | θ) = −log E_{o∈D!} [ p(x | θ, o) ].   (18)

This criterion resembles clustering, where individual models may specialize in only a fraction of the data. In this case, however, a simple estimator such as the one in Eq. (4) would not be available.

1 We make a typical assumption that observations are mutually independent given the latent variables.

Table 1: Results obtained on MNIST using various models and numbers of hidden layers (1HL or 2HL). "Ords" is short for "orderings". These are the average log-probabilities of the test set. EoNADE refers to the ensemble probability (see Eq. (9)). From here on, in all figures and tables we use "HL" to denote the number of hidden layers and "h" for the number of hidden units.

Model                        Log-Prob. | Model                      Log-Prob.
NADE 1HL (fixed order)       -88.86    | RBM (500h, CD-25)          ≈ -86.34
NADE 1HL                     -99.37    | DBN (500h+2000h)           ≈ -84.55
NADE 2HL                     -95.33    | DARN (500h)                ≈ -84.71
NADE-mask 1HL                -92.17    | DARN (500h, adaNoise)      ≈ -84.13
NADE-mask 2HL                -89.17    | NADE-5 1HL                 -90.02
NADE-mask 4HL                -89.60    | NADE-5 2HL                 -87.14
EoNADE-mask 1HL (128 Ords)   -87.71    | EoNADE-5 1HL (128 Ords)    -86.23
EoNADE-mask 2HL (128 Ords)   -85.10    | EoNADE-5 2HL (128 Ords)    -84.68

3 Experiments

We study the proposed model with two datasets: binarized MNIST handwritten digits and Caltech-101 Silhouettes. We train NADE-k with one or two hidden layers (n = 2 and n = 3; see Fig. 1, left and middle) with a hyperbolic tangent as the activation function φ(·). We use stochastic gradient descent on the training set with a minibatch size fixed to 100. We use AdaDelta (Zeiler, 2012) to adaptively choose a learning rate for each parameter update on-the-fly. We use the validation set for early stopping and to select the hyperparameters. With the best model on the validation set, we report the log-probability computed on the test set. We have made our implementation available.2

3.1 MNIST

We closely followed the procedure used by Uria et al. (2014), including the split of the dataset into 50,000 training samples, 10,000 validation samples, and 10,000 test samples. We used the same version where the data has been binarized by sampling. We used a fixed width of 500 units per hidden layer. The number of steps k was selected among {1, 2, 4, 5, 7}.
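One training update in this setup can be sketched as follows (a hypothetical helper of our own naming for the masking scheme of Eq. (4)): draw an ordering o and a position d, treat x_{o≥d} as missing, and reweight the loss on the missing dimensions by D/(D − d + 1):

```python
import numpy as np

def make_training_mask(D, rng):
    """Sample the mask and loss weight used by the stochastic estimator of Eq. (4)."""
    o = rng.permutation(D)            # a uniformly random ordering
    d = int(rng.integers(1, D + 1))   # position d drawn uniformly from 1..D
    m = np.zeros(D)
    m[o[d - 1:]] = 1.0                # dimensions o_{>=d} are treated as missing
    weight = D / (D - d + 1)          # rescales the summed loss over the missing dims
    return m, weight
```

The returned mask plays the role of m in Eq. (5), and the weight makes the minibatch loss an unbiased estimate of the order-averaged negative log-likelihood of Eq. (3).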
According to our preliminary experiments, we found that no separate regularization was needed when using a single hidden layer, but in the case of two hidden layers, we used weight decay with the regularization constant chosen from the interval [e^{−5}, e^{−2}]. Each model was pretrained for 1000 epochs, and fine-tuned for 1000 epochs in the case of one hidden layer and 2000 epochs in the case of two. For NADE-k with both one and two hidden layers, the validation performance was best with k = 5. The regularization constant was chosen to be 0.00122 for the two-hidden-layer model.

Results. We report in Table 1 the mean of the test log-probabilities averaged over randomly selected orderings. We also show the experimental results by others from (Uria et al., 2014; Gregor et al., 2014). We denote the model proposed in (Uria et al., 2014) as NADE-mask. From Table 1, it is clear that NADE-k outperforms the corresponding NADE-mask, both with individual orderings and with ensembles over orderings, using either 1 or 2 hidden layers. NADE-k with two hidden layers achieved generative performance comparable to that of the deep belief network (DBN) with two hidden layers. Fig. 3 shows training curves for some of the models. We can see that NADE-1 does not perform as well as NADE-mask. This confirms that in the case of k = 1, the auxiliary mask input is indeed useful. Also, we can note that the performance of NADE-5 is still improving at the end of the preallocated 2000 epochs, further suggesting that it may be possible to obtain better performance simply by training longer.
2 git@github.com:yaoli/nade_k.git

[Figure 3: NADE-k with k steps of variational inference helps to reduce the training cost (a) and to generalize better (b). NADE-mask performs better than NADE-1 without masks, both in training and in test.]

[Figure 4: (a) The generalization performance of different NADE-k models trained with different k. (b) The generalization performance of NADE-5 2HL, trained with k = 5, but evaluated with various k at test time.]

Fig. 4 (a) shows the effect of the number of iterations k during training. Already with k = 2, we can see that NADE-k outperforms its corresponding NADE-mask. The performance increases until k = 5. We believe the worse performance at k = 7 is due to the well-known difficulty of training deep neural networks, considering that NADE-7 with two hidden layers is effectively a deep neural network with 21 layers. At inference time, we found that it is important to use the exact k that one used to train the model. As can be seen from Fig. 4 (b), the assigned probability increases up to that k, but starts decreasing as the number of iterations goes over it.3

3.1.1 Qualitative Analysis

In Fig. 2, we present how each iteration t = 1 ... k improves the corrupted input (v⟨t⟩ from Eq. (5)). We also investigate what happens when the test-time k is larger than the training k = 5.
We can see that in all cases the iteration, which is a fixed-point update, seems to converge to a point that is in most cases close to the ground-truth sample. Fig. 4 (b) shows, however, that the generalization performance drops after k = 5 when training with k = 5. From Fig. 2, we can see that the reconstruction continues to get sharper even after k = 5, which seems to be the underlying reason for this phenomenon.

3 In the future, one could explore possibilities for helping convergence beyond step k, for instance by using costs based on the reconstructions at k − 1 and k even in the fine-tuning phase.

[Figure 5: Samples generated from NADE-k trained on (a) MNIST and (b) Caltech-101 Silhouettes.]

[Figure 6: Filters learned by NADE-5 2HL. (a) A random subset of the encoding filters. (b) A random subset of the decoding filters.]

From the samples generated by the trained NADE-5 with two hidden layers, shown in Fig. 5 (a), we can see that the model is able to generate digits. Furthermore, the filters learned by the model show that it has learned parts of digits such as pen strokes (see Fig. 6).

3.1.2 Variability over Orderings

In Section 2, we argued that we can perform any inference task p(x_mis | x_obs) easily and efficiently by restricting the set of orderings O in Eq. (9) to ones where x_obs comes before x_mis. For this to work well, we should investigate how much the different orderings vary. To measure the variability over orderings, we computed the variance of log p(x | o) over 128 randomly chosen orderings o with the trained NADE-k's and NADE-mask with a single hidden layer. For comparison, we computed the variance of log p(x | o) over the 10,000 test samples.

Table 2: The variance of log p(x | o) over orderings o and over test samples x.

Model           E_{o,x}[log p(x|o)]   sqrt(E_x Var_o[log p(x|o)])   sqrt(E_o Var_x[log p(x|o)])
NADE-mask 1HL   -92.17                3.5                           23.5
NADE-5 1HL      -90.02                3.1                           24.2
NADE-5 2HL      -87.14                2.4                           22.7
In Table 2, the variability over the orderings is clearly much smaller than that over the samples. Furthermore, the variability over orderings tends to decrease with the better models.

3.2 Caltech-101 Silhouettes

We also evaluate the proposed NADE-k on Caltech-101 Silhouettes (Marlin et al., 2010), using the standard split of 4100 training samples, 2264 validation samples, and 2307 test samples. We demonstrate the advantage of NADE-k compared with NADE-mask under the constraint that they have a matching number of parameters. In particular, we compare NADE-k with 1000 hidden units against NADE-mask with 670 hidden units, and NADE-k with 4000 hidden units against NADE-mask with 2670 hidden units. We optimized the hyper-parameter k ∈ {1, 2, . . . , 10} in the case of NADE-k. For both NADE-k and NADE-mask, we experimented with no regularization, with weight decay, and with dropout. Unlike in the previous experiments, we did not use the pretraining scheme (see Eq. (10)).

Table 3: Average log-probabilities of test samples of Caltech-101 Silhouettes. (⋆) Results from Cho et al. (2013). The terms in parentheses indicate the number of hidden units, the total number of parameters (M for million), and the L2 regularization coefficient. NADE-mask 670h achieves its best performance without any regularization.

Model                                   Test LL
RBM⋆ (2000h, 1.57M)                     -108.98
NADE-mask (670h, 1.58M)                 -112.51
NADE-2 (1000h, 1.57M, L2=0.0054)        -108.81
RBM⋆ (4000h, 3.14M)                     -107.78
NADE-mask (2670h, 6.28M, L2=0.00106)    -110.95
NADE-5 (4000h, 6.28M, L2=0.0068)        -107.28

As we can see from Table 3, NADE-k outperforms NADE-mask regardless of the number of parameters. In addition, NADE-2 with 1000 hidden units matches the performance of an RBM with the same number of parameters. Furthermore, NADE-5 has outperformed the previous best result obtained with RBMs in (Cho et al., 2013), achieving the state-of-the-art result on this dataset.
We can see from the samples generated by NADE-k shown in Fig. 5 (b) that the model has learned the data well.

4 Conclusions and Discussion

In this paper, we proposed a model called the iterative neural autoregressive distribution estimator (NADE-k) that extends the conventional neural autoregressive distribution estimator (NADE) and its order-agnostic training procedure. The proposed NADE-k maintains the tractability of the original NADE, and we showed that it outperforms the original NADE as well as similar, but intractable, generative models such as restricted Boltzmann machines and deep belief networks. The proposed extension is inspired by variational inference in probabilistic models such as restricted Boltzmann machines (RBM) and deep Boltzmann machines (DBM). Just like an iterative mean-field approximation in Boltzmann machines, the proposed NADE-k performs multiple iterations through the hidden layers and the visible layer to infer the probability of a missing value, unlike the original NADE, which infers a missing value in a single pass through the hidden layers. Our empirical results show that this approach of multiple iterations improves the performance of a model with the same number of parameters, compared to performing a single iteration. This suggests that the inference method has a significant effect on how efficiently the model parameters are utilized. We also observed that with our approach the generative performance of NADE can come close to that of more sophisticated models such as deep belief networks. In the future, a more in-depth analysis of the proposed NADE-k is needed. For instance, the relationship between NADE-k and related models such as the RBM needs to be studied both theoretically and empirically. The computational speed of the method could be improved both in training (by using better optimization algorithms; see, e.g., (Pascanu and Bengio, 2014)) and in testing (e.g.
by handling the components in chunks rather than fully sequentially). The computational efficiency of sampling from NADE-k can be further improved based on the recent work of Yao et al. (2014), where an annealed Markov chain may be used to efficiently generate samples from the trained ensemble. Another promising idea for improving the model performance further is to let the model adjust its own confidence based on d. For instance, in the top right corner of Fig. 2, we see a case with many missing values (low d), where the model is too confident about the reconstructed digit 8 instead of the correct digit 2.

Acknowledgements

The authors would like to acknowledge the support of NSERC, Calcul Québec, Compute Canada, the Canada Research Chair and CIFAR, and the developers of Theano (Bergstra et al., 2010; Bastien et al., 2012).

References

Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I. J., Bergeron, A., Bouchard, N., and Bengio, Y. (2012). Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop.
Bengio, Y. and Bengio, S. (2000). Modeling high-dimensional discrete data with multi-layer neural networks. In NIPS'99, pages 400–406. MIT Press.
Bengio, Y., Mesnil, G., Dauphin, Y., and Rifai, S. (2013). Better mixing via deep representations. In Proceedings of the 30th International Conference on Machine Learning (ICML'13). ACM.
Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010). Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy). Oral Presentation.
Brakel, P., Stroobandt, D., and Schrauwen, B. (2013). Training energy-based models for time-series imputation. The Journal of Machine Learning Research, 14(1), 2771–2797.
Cho, K., Raiko, T., and Ilin, A. (2013). Enhanced gradient for training restricted Boltzmann machines.
Neural Computation, 25(3), 805–831.
Domke, J. (2011). Parameter learning with truncated message-passing. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 2937–2943. IEEE.
Goodfellow, I., Mirza, M., Courville, A., and Bengio, Y. (2013). Multi-prediction deep Boltzmann machines. In Advances in Neural Information Processing Systems, pages 548–556.
Gregor, K., Danihelka, I., Mnih, A., Blundell, C., and Wierstra, D. (2014). Deep autoregressive networks. In International Conference on Machine Learning (ICML'2014).
Heckerman, D., Chickering, D. M., Meek, C., Rounthwaite, R., and Kadie, C. (2000). Dependency networks for inference, collaborative filtering, and data visualization. Journal of Machine Learning Research, 1, 49–75.
Hinton, G. E. (2000). Training products of experts by minimizing contrastive divergence. Technical Report GCNU TR 2000-004, Gatsby Unit, University College London.
Huang, F. and Ogata, Y. (2002). Generalized pseudo-likelihood estimates for Markov random fields on lattice. Annals of the Institute of Statistical Mathematics, 54(1), 1–18.
Larochelle, H. and Murray, I. (2011). The neural autoregressive distribution estimator. Journal of Machine Learning Research, 15, 29–37.
Marlin, B., Swersky, K., Chen, B., and de Freitas, N. (2010). Inductive principles for restricted Boltzmann machine learning. In Proceedings of The Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS'10), volume 9, pages 509–516.
Pascanu, R. and Bengio, Y. (2014). Revisiting natural gradient for deep networks. In International Conference on Learning Representations 2014 (Conference Track).
Peterson, C. and Anderson, J. R. (1987). A mean field theory learning algorithm for neural networks. Complex Systems, 1(5), 995–1019.
Stoyanov, V., Ropson, A., and Eisner, J. (2011). Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure.
In International Conference on Artificial Intelligence and Statistics, pages 725–733.
Uria, B., Murray, I., and Larochelle, H. (2014). A deep and tractable density estimator. In Proceedings of the 30th International Conference on Machine Learning (ICML'14).
Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., and Manzagol, P.-A. (2010). Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Machine Learning Res., 11.
Yao, L., Ozair, S., Cho, K., and Bengio, Y. (2014). On the equivalence between deep NADE and generative stochastic networks. In European Conference on Machine Learning (ECML/PKDD'14). Springer.
Zeiler, M. D. (2012). ADADELTA: an adaptive learning rate method. Technical report, arXiv 1212.5701.
Reducing the Rank of Relational Factorization Models by Including Observable Patterns Maximilian Nickel1,2 Xueyan Jiang3,4 Volker Tresp3,4 1LCSL, Poggio Lab, Massachusetts Institute of Technology, Cambridge, MA, USA 2Istituto Italiano di Tecnologia, Genova, Italy 3Ludwig Maximilian University, Munich, Germany 4Siemens AG, Corporate Technology, Munich, Germany mnick@mit.edu, {xueyan.jiang.ext,volker.tresp}@siemens.com Abstract Tensor factorization has become a popular method for learning from multirelational data. In this context, the rank of the factorization is an important parameter that determines runtime as well as generalization ability. To identify conditions under which factorization is an efficient approach for learning from relational data, we derive upper and lower bounds on the rank required to recover adjacency tensors. Based on our findings, we propose a novel additive tensor factorization model to learn from latent and observable patterns on multi-relational data and present a scalable algorithm for computing the factorization. We show experimentally both that the proposed additive model does improve the predictive performance over pure latent variable methods and that it also reduces the required rank — and therefore runtime and memory complexity — significantly. 1 Introduction Relational and graph-structured data has become ubiquitous in many fields of application such as social network analysis, bioinformatics, and artificial intelligence. Moreover, relational data is generated in unprecedented amounts in projects like the Semantic Web, YAGO [27], NELL [4], and Google’s Knowledge Graph [5] such that learning from relational data, and in particular learning from large-scale relational data, has become an important subfield of machine learning. Existing approaches to relational learning can approximately be divided into two groups: First, methods that explain relationships via observable variables, i.e. 
via the observed relationships and attributes of entities, and second, methods that explain relationships via a set of latent variables. The objective of latent variable models is to infer the states of these hidden variables which, once known, permit the prediction of unknown relationships. Methods for learning from observable variables cover a wide range of approaches, e.g. inductive logic programming methods such as FOIL [23], statistical relational learning methods such as Probabilistic Relational Models [6] and Markov Logic Networks [24], and link prediction heuristics based on Jaccard's Coefficient and the Katz Centrality [16]. Important examples of latent variable models for relational data include the IHRM and the IRM [29, 10], the Mixed Membership Stochastic Blockmodel [1], and low-rank matrix factorizations [16, 26, 7]. More recently, tensor factorization, a generalization of matrix factorization to higher-order data, has shown state-of-the-art results for relationship prediction on multi-relational data [21, 8, 2, 13]. The number of latent variables in tensor factorization is determined by the number of latent components used in the factorization, which in turn is bounded by the factorization rank. While tensor and matrix factorization algorithms typically scale well with the size of the data — which is one reason for their appeal — they often do not scale well with respect to the rank of the factorization. For instance, RESCAL is a state-of-the-art relational learning method based on tensor factorization which can be applied to large knowledge bases consisting of millions of entities and billions of known facts [22].
However, while the runtime of the most scalable known algorithm for computing RESCAL scales linearly with the number of entities, linearly with the number of relations, and linearly with the number of known facts, it scales cubically with the rank of the factorization [22].¹ Moreover, the memory requirements of tensor factorizations like RESCAL quickly become infeasible on large data sets if the factorization rank is large and no additional sparsity of the factors is enforced. Hence, tensor (and matrix) rank is a central parameter of factorization methods that determines generalization ability as well as scalability. In this paper we therefore study how the rank of factorization methods can be reduced while maintaining their predictive performance and scalability. We first analyze under which conditions tensor and matrix factorization require high or low rank on relational data. Based on our findings, we then propose an additive tensor decomposition approach that reduces the required rank of the factorization by combining latent and observable variable approaches. This paper is organized as follows: In section 2 we develop the main theoretical results of this paper, where we show that the rank of an adjacency tensor is lower bounded by the maximum number of strongly connected components of a single relation and upper bounded by the sum of the diclique partition numbers of all relations. Based on our theoretical results, we propose in section 3 a novel tensor decomposition approach for multi-relational data and present a scalable algorithm to compute the decomposition. In section 4 we evaluate our model on various multi-relational datasets.

Preliminaries

We will model relational data as a directed graph (digraph), i.e. as an ordered pair $\Gamma = (V, E)$ of a nonempty set of vertices $V$ and a set of directed edges $E \subseteq V \times V$. An existing edge between nodes $v_i$ and $v_j$ will be denoted by $v_i \rightsquigarrow v_j$.
By a slight abuse of notation, $\Gamma(Y)$ will indicate the digraph $\Gamma$ associated with an adjacency matrix $Y \in \{0,1\}^{N \times N}$. Next, we will briefly review further concepts of tensor and graph theory that are important for the course of this paper.

Definition 1. A strongly connected component of a digraph $\Gamma$ is a maximal subgraph $\Psi$ for which every vertex is reachable from any other vertex in $\Psi$ by following the directional edges in the subgraph. A strongly connected component is trivial if it consists only of a single element, i.e. if it is of the form $\Psi = (\{v_i\}, \emptyset)$, and nontrivial otherwise. We will denote the number of strongly connected components in a digraph $\Gamma$ by $\mathrm{scc}(\Gamma)$. The number of nontrivial strongly connected components will be denoted by $\mathrm{scc}^{+}(\Gamma)$.

Definition 2. A digraph $\Gamma = (V, E)$ is a diclique if it is an orientation of a complete undirected bipartite graph with bipartition $(V_1, V_2)$ such that $v_1 \in V_1$ and $v_2 \in V_2$ for every edge $v_1 \rightsquigarrow v_2 \in E$.

Figure 3 in supplementary material A shows an example of a diclique. Please note that dicliques consist only of trivial strongly connected components, as there cannot exist any cycles in a diclique. Given the concept of a diclique, the diclique partition number of a digraph is defined as:

Definition 3. The diclique partition number $\mathrm{dp}(\Gamma)$ of a digraph $\Gamma = (V, E)$ is the minimum number of dicliques such that each edge $e \in E$ is contained in exactly one diclique.

Tensors can be regarded as higher-order generalizations of vectors and matrices. In the following, we will only consider third-order tensors of the form $\mathcal{X} \in \mathbb{R}^{I \times J \times K}$, although many concepts generalize to higher-order tensors. The mode-$n$ unfolding (or matricization) of $\mathcal{X}$ arranges the mode-$n$ fibers of $\mathcal{X}$ as the columns of a newly formed matrix and will be denoted by $X_{(n)}$. The tensor-matrix product $\mathcal{A} = \mathcal{X} \times_n B$ multiplies the tensor $\mathcal{X}$ with the matrix $B$ along the $n$-th mode of $\mathcal{X}$ such that $A_{(n)} = B X_{(n)}$.
For a detailed introduction to tensors and these operations we refer the reader to Kolda et al. [12]. The $k$-th frontal slice of a third-order tensor $\mathcal{X} \in \mathbb{R}^{I \times J \times K}$ will be denoted by $X_k \in \mathbb{R}^{I \times J}$. The outer product of vectors will be denoted by $a \circ b$. In contrast to matrices, there exist two non-equivalent notions of the rank of a tensor:

Definition 4. Let $\mathcal{X} \in \mathbb{R}^{I \times J \times K}$ be a third-order tensor. The tensor rank $\text{t-rank}(\mathcal{X})$ of $\mathcal{X}$ is defined as $\text{t-rank}(\mathcal{X}) = \min \{ r \mid \mathcal{X} = \sum_{i=1}^{r} a_i \circ b_i \circ c_i \}$, where $a_i \in \mathbb{R}^I$, $b_i \in \mathbb{R}^J$, and $c_i \in \mathbb{R}^K$. The multilinear rank $\text{n-rank}(\mathcal{X})$ of $\mathcal{X}$ is defined as the tuple $(r_1, r_2, r_3)$, where $r_i = \mathrm{rank}(X_{(i)})$.

To model multi-relational data as tensors, we use the following concept of an adjacency tensor:

Definition 5. Let $\mathcal{G} = \{(V, E_k)\}_{k=1}^{K}$ be a set of digraphs over the same set of vertices $V$, where $|V| = N$. The adjacency tensor of $\mathcal{G}$ is a third-order tensor $\mathcal{X} \in \{0,1\}^{N \times N \times K}$ with entries $x_{ijk} = 1$ if $v_i \rightsquigarrow v_j \in E_k$ and $x_{ijk} = 0$ otherwise.

¹ Similar results can be obtained for state-of-the-art algorithms to compute the well-known CP and Tucker decompositions. Please see supplementary material A.3 for the respective derivations.

For a single digraph, an adjacency tensor is equivalent to the digraph's adjacency matrix. Note that $K$ corresponds to the number of relation types in a domain.

2 On the Algebraic Complexity of Graph-Structured Data

In this section, we want to identify conditions under which tensor factorization can be considered efficient for relational learning. Let $\mathcal{X}$ denote an observed adjacency tensor with missing or noisy entries from which we seek to recover the true adjacency tensor $\mathcal{Y}$. Rank affects both the predictive and the runtime performance of a factorization: a high factorization rank will lead to poor runtime performance, while a low factorization rank might not be sufficient to model $\mathcal{Y}$.
We are therefore interested in identifying upper and lower bounds on the minimal rank — either tensor rank or multilinear rank — that is required such that a factorization can model the true adjacency tensor $\mathcal{Y}$. Please note that we are not concerned with bounds on the generalization error or the sample complexity that is needed to learn a good model, but with bounds on the algebraic complexity that is needed to express the true underlying data via factorizations. For sign-matrices $Y \in \{\pm 1\}^{N \times N}$, this question has been discussed in combinatorics and communication complexity via the sign-rank $\mathrm{rank}_{\pm}(Y)$, which is the minimal rank needed to recover the sign-pattern of $Y$:

$$\mathrm{rank}_{\pm}(Y) = \min_{M \in \mathbb{R}^{N \times N}} \{ \mathrm{rank}(M) \mid \forall i,j : \mathrm{sgn}(m_{ij}) = y_{ij} \}. \quad (1)$$

Although the concept of sign-rank can be extended to adjacency tensors, bounds based on the sign-rank would have only limited significance for our purpose, as no practical algorithms exist to find the solution to equation (1). Instead, we provide upper and lower bounds on tensor and multilinear rank, i.e. bounds on the exact recovery of $\mathcal{Y}$, for the following reasons: It follows immediately from (1) that any upper bound on $\mathrm{rank}(Y)$ will also hold for $\mathrm{rank}_{\pm}(Y)$, since it has to hold that $\mathrm{rank}_{\pm}(Y) \le \mathrm{rank}(Y)$. Upper bounds on $\mathrm{rank}(Y)$ can therefore provide insight into the conditions under which factorizations can be efficient on relational data — regardless of whether we seek to recover exact values or sign patterns. Lower bounds on $\mathrm{rank}(Y)$ provide insight into the conditions under which the exact recovery of $\mathcal{Y}$ can be inefficient. Furthermore, it can be observed empirically that lower bounds on the rank are more informative for existing factorization approaches to relational learning like [21, 13, 16] than bounds on sign-rank. For instance, let $S_n = 2I_n - J_n$ be the "signed identity matrix" of size $n$, where $I_n$ denotes the $n \times n$ identity matrix and $J_n$ denotes the $n \times n$ matrix of all ones.
While it is known that $\mathrm{rank}_{\pm}(S_n) = O(1)$ for any size $n$ [17], it can be checked empirically that SVD requires a rank larger than $\frac{n}{2}$, i.e. a rank of $O(n)$, to recover the sign pattern of $S_n$. Based on these considerations, we now state the main theorem of this paper, which bounds the different notions of the rank of an adjacency tensor by the diclique partition numbers and the number of strongly connected components of the involved relations:

Theorem 1. The tensor rank $\text{t-rank}(\mathcal{Y})$ and the multilinear rank $\text{n-rank}(\mathcal{Y}) = (r_1, r_2, r_3)$ of any adjacency tensor $\mathcal{Y} \in \{0,1\}^{N \times N \times K}$ representing $K$ relations $\{\Gamma_k(Y_k)\}_{k=1}^{K}$ are bounded as

$$\sum_{k=1}^{K} \mathrm{dp}(\Gamma_k) \;\ge\; \theta \;\ge\; \max_k \mathrm{scc}^{+}(\Gamma_k),$$

where $\theta$ is any of the quantities $\text{t-rank}(\mathcal{Y})$, $r_1$, or $r_2$.

To prove theorem 1 we will first derive upper and lower bounds for adjacency matrices and then show how these bounds generalize to adjacency tensors.

Lemma 1. For any adjacency matrix $Y \in \{0,1\}^{N \times N}$ it holds that $\mathrm{dp}(\Gamma) \ge \mathrm{rank}(Y) \ge \mathrm{scc}^{+}(\Gamma)$.

Proof. The upper bound of lemma 1 follows directly from the fact that $\mathrm{dp}(\Gamma(Y)) = \mathrm{rank}_{\mathbb{N}}(Y)$ and the fact that $\mathrm{rank}_{\mathbb{N}}(Y) \ge \mathrm{rank}(Y)$, where $\mathrm{rank}_{\mathbb{N}}(Y)$ denotes the non-negative integer rank of the binary matrix $Y$ [19, see eq. 1.6.5 and eq. 1.7.1]. □

Next we will prove the lower bound of lemma 1. Let $\lambda_i(Y)$ denote the $i$-th (complex) eigenvalue of $Y$ and let $\Lambda(Y)$ denote the spectrum of $Y \in \mathbb{R}^{N \times N}$, i.e. the multiset of (complex) eigenvalues of $Y$. Furthermore, let $\rho(Y) = \max_i |\lambda_i(Y)|$ be the spectral radius of $Y$. Now, recall the celebrated Perron-Frobenius theorem:

Theorem 2 ([25, Theorem 8.2]). Let $Y \in \mathbb{R}^{N \times N}$ with $y_{ij} \ge 0$ be a non-negative irreducible matrix. Then $\rho(Y) > 0$ is a simple eigenvalue of $Y$ associated with a positive eigenvector.

Please note that a nontrivial digraph is strongly connected iff its adjacency matrix is irreducible [3, Theorem 3.2.1]. Furthermore, an adjacency matrix is nilpotent iff the associated digraph is acyclic [3, Section 9.8]. Hence, the adjacency matrix of a strongly connected component $\Psi$ is nilpotent iff $\Psi$ is trivial.
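The lower bound of lemma 1 can also be checked numerically on small examples; below is a sketch using numpy and scipy, where counting a singleton component as nontrivial only when it carries a self-loop is our reading of Definition 1:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def scc_plus(Y):
    """Number of nontrivial strongly connected components of the digraph of Y."""
    n, labels = connected_components(csr_matrix(Y), connection='strong')
    nontrivial = 0
    for comp in range(n):
        members = np.flatnonzero(labels == comp)
        # a size-one component is nontrivial only if it has a self-loop (assumption)
        if len(members) > 1 or Y[members[0], members[0]]:
            nontrivial += 1
    return nontrivial

# idealized marriedTo relation: m disjoint 2-cycles -> m nontrivial components
m = 4
Y = np.zeros((2 * m, 2 * m), dtype=int)
for i in range(m):
    Y[2 * i, 2 * i + 1] = Y[2 * i + 1, 2 * i] = 1

assert scc_plus(Y) == m
assert np.linalg.matrix_rank(Y) >= scc_plus(Y)   # the lower bound of lemma 1
```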
Given these considerations, we can now prove the lower bound of lemma 1:

Lemma 2. For any non-negative adjacency matrix $Y \in \mathbb{R}^{N \times N}$ with $y_{ij} \ge 0$ of a weighted digraph $\Gamma$ it holds that $\mathrm{rank}(Y) \ge \mathrm{scc}^{+}(\Gamma)$.

Proof. Let $\Gamma$ consist of $k$ nontrivial strongly connected components. The Frobenius normal form $B$ of its associated adjacency matrix $Y$ then consists of $k$ irreducible matrices $B_i$ on its block diagonal. It follows from theorem 2 that each irreducible $B_i$ has at least one nonzero eigenvalue. Since $B$ is block upper triangular, it also holds that $\Lambda(B) = \bigcup_{i=1}^{k} \Lambda(B_i)$. As the rank of a square matrix is larger than or equal to the number of its nonzero eigenvalues, it follows that $\mathrm{rank}(B) \ge k$. Lemma 2 follows from the fact that $B$ is similar to $Y$ and that matrix similarity preserves rank. □

So far, we have shown that the rank of an adjacency matrix $Y$ is bounded by the diclique partition number and the number of nontrivial strongly connected components of the associated digraph. To complete the proof of theorem 1 we will now show that these bounds for uni-relational data translate directly to multi-relational data and to the different notions of the rank of an adjacency tensor. In particular, we will show that both notions of tensor rank are lower bounded by the maximum rank of a single frontal slice of the tensor and upper bounded by the sum of the ranks of all frontal slices:

Lemma 3. The tensor rank $\text{t-rank}(\mathcal{Y})$ and multilinear rank $\text{n-rank}(\mathcal{Y}) = (r_1, r_2, r_3)$ of any third-order tensor $\mathcal{Y} \in \mathbb{R}^{I \times J \times K}$ with frontal slices $Y_k$ are bounded as

$$\sum_{k=1}^{K} \mathrm{rank}(Y_k) \;\ge\; \theta \;\ge\; \max_k \mathrm{rank}(Y_k),$$

where $\theta$ is any of the quantities $\text{t-rank}(\mathcal{Y})$, $r_1$, or $r_2$.

Proof. Due to space constraints, we include only the proof for tensor rank. The proof for multilinear rank can be found in supplementary material A.1. Let $\text{t-rank}(\mathcal{Y}) = r$ and $\mathrm{rank}(Y_k) = r_{\max}$. It can be seen from the definition of tensor rank that $Y_k = \sum_{i=1}^{r} c_{ki} (a_i b_i^{\top})$. Consequently, it follows from the subadditivity of matrix rank, i.e.
$\mathrm{rank}(A + B) \le \mathrm{rank}(A) + \mathrm{rank}(B)$, that $r_{\max} = \mathrm{rank}\big(\sum_{i=1}^{r} c_{ki} a_i b_i^{\top}\big) \le \sum_{i=1}^{r} \mathrm{rank}(c_{ki} a_i b_i^{\top}) \le r$, where the last inequality follows from $\mathrm{rank}(c_{ki} a_i b_i^{\top}) \le 1$. Now we will derive the upper bound of lemma 3 by providing a decomposition of $\mathcal{Y}$ with rank $r = \sum_k \mathrm{rank}(Y_k)$ that recovers $\mathcal{Y}$ exactly. Let $Y_k = U_k S_k V_k^{\top}$ be the SVD of $Y_k$ with $S_k = \mathrm{diag}(s_k)$. Furthermore, let $U = [U_1\ U_2\ \cdots\ U_K]$, $V = [V_1\ V_2\ \cdots\ V_K]$, and let $S$ be a block-diagonal matrix where the $i$-th block on the diagonal is equal to $s_i^{\top}$ and all other entries are 0. It can easily be verified that $\sum_{i=1}^{r} \hat{u}_i \circ \hat{v}_i \circ \hat{s}_i$ provides an exact decomposition of $\mathcal{Y}$, where $r = \sum_k \mathrm{rank}(Y_k)$ and $\hat{u}_i$, $\hat{v}_i$, and $\hat{s}_i$ are the $i$-th columns of the matrices $U$, $V$, and $S$. The inequality in lemma 3 follows since $r$ is not necessarily minimal. □

Theorem 1 can now be derived by combining lemmas 1 and 3, which concludes the proof.

Discussion

It can be seen from theorem 1 that factorizations can be computationally efficient when $\sum_k \mathrm{dp}(\Gamma_k)$ is small. However, factorizations can potentially be inefficient when $\mathrm{scc}^{+}(\Gamma_k)$ is large for any $\Gamma_k$ in the data. For instance, consider an idealized marriedTo relation, where each person is married to exactly one person. Evidently, for $m$ marriages, the associated digraph would consist of $m$ strongly connected components, i.e. one component for each marriage. According to lemma 2, a factorization model would require at least $m$ latent components to recover this adjacency matrix exactly. Consequently, an algorithm with cubic runtime complexity in the rank would only be able to recover $\mathcal{Y}$ for this relation when the number of marriages is small, which limits its applicability to such relations. A second important observation for multi-relational learning is that the lower bound in theorem 1 depends only on the largest rank of a single frontal slice (i.e. a single adjacency matrix) in $\mathcal{Y}$.
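The constructive upper-bound argument above can be replayed numerically by stacking per-slice SVD factors and checking exact recovery; a numpy sketch (the 1e-10 threshold used for the numerical slice rank is an arbitrary assumption):

```python
import numpy as np

def stacked_svd_decomposition(Y):
    """Build CP-style factors (U, V, S) from per-slice SVDs, as in the proof of Lemma 3.

    Y: (I, J, K) tensor with frontal slices Y[:, :, k].
    Returns factor matrices whose columns satisfy sum_i u_i o v_i o s_i = Y.
    """
    I, J, K = Y.shape
    Us, Vs, svals = [], [], []
    for k in range(K):
        Uk, s, Vt = np.linalg.svd(Y[:, :, k], full_matrices=False)
        r_k = int(np.sum(s > 1e-10))          # numerical rank of the slice
        Us.append(Uk[:, :r_k])
        Vs.append(Vt[:r_k, :].T)
        svals.append(s[:r_k])
    U = np.concatenate(Us, axis=1)            # I x r, with r = sum_k rank(Y_k)
    V = np.concatenate(Vs, axis=1)            # J x r
    S = np.zeros((K, U.shape[1]))             # block "diagonal" of singular values
    col = 0
    for k, s in enumerate(svals):
        S[k, col:col + len(s)] = s
        col += len(s)
    return U, V, S

rng = np.random.default_rng(0)
Y = (rng.random((5, 5, 3)) < 0.3).astype(float)
U, V, S = stacked_svd_decomposition(Y)
Y_hat = np.einsum('ir,jr,kr->ijk', U, V, S)   # sum_i u_i o v_i o s_i
assert np.allclose(Y, Y_hat)                  # exact recovery, as the proof claims
```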
For multi-relational learning this means that regularities between different relations cannot decrease tensor or multilinear rank below the largest matrix rank of a single relation. For instance, consider an $N \times N \times 2$ tensor $\mathcal{Y}$ where $Y_1 = Y_2$. Clearly it holds that $\mathrm{rank}(Y_{(3)}) = 1$, such that $Y_1$ could easily be predicted from $Y_2$ when $Y_2$ is known. However, theorem 1 states that the rank of the factorization must be at least $\mathrm{rank}(Y_1)$ — which can be arbitrarily large, up to $N$ — when the first two modes of $\mathcal{Y}$ are also factorized. Please note that this is not a statement about sample complexity or generalization error, which can be reduced when factorizing all modes of a tensor, but a statement about the minimal rank that is required to express the data. A last observation from the previous discussion is that factorizations and observable variable methods excel at different aspects of relationship prediction. For instance, predicting relationships in the idealized marriedTo relation can be done easily with Horn clauses and link prediction heuristics, as listed in supplementary material A.2. In contrast, factorization methods would be inefficient in predicting links in this relation as they would require at least one latent component for each marriage. At the same time, links in a diclique of any size can trivially be modeled with a rank-2 factorization that indicates the partition memberships, while standard neighborhood-based methods will fail on dicliques since — by the definition of a diclique — there do not exist links within one partition, yet the only vertices that share neighbors are located in the same partition.

3 An Additive Relational Effects Model

RESCAL is a state-of-the-art relational learning method that is based on a constrained Tucker decomposition and as such is subject to bounds as in theorem 1.
Motivated by the results of section 2, we propose an additive tensor decomposition approach that combines the strengths of latent and observable variable methods to reduce the rank requirements of RESCAL on multi-relational data. To include the information of observable pattern methods in the factorization, we augment the RESCAL model with an additive term that holds the predictions of observable pattern methods. In particular, let $\mathcal{X} \in \{0,1\}^{N \times N \times K}$ be a third-order adjacency tensor and $\mathcal{M} \in \mathbb{R}^{N \times N \times P}$ be a third-order tensor that holds the predictions of an arbitrary number of relational learning methods. The proposed additive relational effects model (ARE) decomposes $\mathcal{X}$ into

$$\mathcal{X} \approx \mathcal{R} \times_1 A \times_2 A + \mathcal{M} \times_3 W, \quad (2)$$

where $A \in \mathbb{R}^{N \times r}$, $\mathcal{R} \in \mathbb{R}^{r \times r \times K}$, and $W \in \mathbb{R}^{K \times P}$. The first term of equation (2) corresponds to the RESCAL model, which can be interpreted as follows: the matrix $A$ holds the latent variable representations of the entities, while each frontal slice $R_k$ of $\mathcal{R}$ is an asymmetric $r \times r$ matrix that models the interactions of the latent components for the $k$-th relation. The variable $r$ denotes the number of latent components of the factorization. An important aspect of RESCAL for relational learning is that entities have a unique latent representation via the matrix $A$. This enables a relational learning effect via the propagation of information over different relations and the occurrences of entities as subjects or objects in relationships. For a detailed description of RESCAL we refer the reader to Nickel et al. [21, 22]. After computing the factorization (2), the score for the existence of a single relationship is calculated in ARE via $\hat{x}_{ijk} = a_i^{\top} R_k a_j + \sum_{p=1}^{P} w_{kp} m_{ijp}$. The tensor $\mathcal{M}$ is constructed as follows: let $F = \{f_p\}_{p=1}^{P}$ be a set of given real-valued functions $f_p : V \times V \to \mathbb{R}$ which assign scores to each pair of entities in $V$. Examples of such score functions include link prediction heuristics such as Common Neighbors, Katz Centrality, or Horn clauses.
Depending on the underlying model, these scores can be interpreted as confidence values or as probabilities that a relationship exists between two entities. We collect the real-valued predictions of the $P$ score functions in the tensor $\mathcal{M} \in \mathbb{R}^{N \times N \times P}$ by setting $m_{ijp} = f_p(v_i, v_j)$. Supplementary material A.2 provides a detailed description of the construction of $\mathcal{M}$ for typical score functions. The tensor $\mathcal{M}$ acts in the factorization as an independent source of information that predicts the existence of relationships. The term $\mathcal{M} \times_3 W$ can be interpreted as learning a set of weights $w_{kp}$ which indicate how much the $p$-th score function in $\mathcal{M}$ correlates with the $k$-th relation in $\mathcal{X}$. For this reason we also refer to $\mathcal{M}$ as the oracle tensor. If $\mathcal{M}$ is composed of relation path features as proposed by Lao et al. [15], the term $\mathcal{M} \times_3 W$ is closely related to the Path Ranking Algorithm (PRA) [15]. The main idea of equation (2) is the following: the term $\mathcal{R} \times_1 A \times_2 A$ is equivalent to the RESCAL model and provides an efficient approach to learn from latent patterns in relational data. The oracle tensor $\mathcal{M}$, on the other hand, is not factorized, such that it can hold information that is difficult to predict via latent variable methods. As it is not clear a priori which score functions are good predictors for which relations, the term $\mathcal{M} \times_3 W$ learns a weighting of how predictive each score function is for each relation. By integrating both terms in an additive model, the term $\mathcal{M} \times_3 W$ can potentially reduce the required rank of the RESCAL term by explaining links that, for instance, reduce the diclique partition number of a digraph. Rules and operations that are likely to reduce the diclique partition number of slices in $\mathcal{X}$ are therefore good candidates to be included in $\mathcal{M}$.
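As an illustration, here is a sketch of how one slice of an oracle tensor might be built from a simple score function such as Common Neighbors; the particular score and shapes are assumptions for illustration, not the authors' exact construction:

```python
import numpy as np

def common_neighbors_slice(X_k):
    """Score f(v_i, v_j) = number of common out-neighbors in relation k.

    X_k: (N, N) binary adjacency matrix; returns an (N, N) real-valued slice
    of the oracle tensor with m_ij = f(v_i, v_j).
    """
    return X_k @ X_k.T          # entry (i, j): |out(v_i) intersect out(v_j)|

def build_oracle_tensor(slices):
    """Stack P score-function outputs into M of shape (N, N, P)."""
    return np.stack(slices, axis=2)

rng = np.random.default_rng(1)
X_k = (rng.random((6, 6)) < 0.4).astype(float)
# include a copy of the observed slice itself as a second score function
M = build_oracle_tensor([common_neighbors_slice(X_k), X_k])
assert M.shape == (6, 6, 2)
```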
For instance, by including a copy of the observed adjacency tensor $\mathcal{X}$ in $\mathcal{M}$ (or some selected frontal slices $X_k$), the term $\mathcal{M} \times_3 W$ can easily model common multi-relational patterns where the existence of a relationship in one relation correlates with the existence of a relationship between the same entities in another relation, via $x_{ijk} = \sum_{p \ne k} w_{kp} x_{ijp}$. Since $w_{kp}$ is allowed to be negative, anti-correlations can be modeled efficiently. ARE is similar in spirit to the model of Koren [14], which extends SVD with additive terms to include local neighborhood information in a uni-relational recommendation setting, and to Jiang et al. [9], which uses an additive matrix factorization model for link prediction. Furthermore, the recently proposed Google Knowledge Vault (KV) [5] considers a combination of PRA and a neural network model related to RESCAL for learning from large multi-relational datasets. However, in KV both models are trained separately and combined only later in a separate fusion step, whereas ARE learns both models jointly, which leads to the desired rank-reduction effect. To compute ARE, we pursue an optimization scheme similar to the one used for RESCAL, which has been shown to scale to large datasets [22]. In particular, we solve the regularized optimization problem

$$\min_{A, \mathcal{R}, W} \;\|\mathcal{X} - (\mathcal{R} \times_1 A \times_2 A + \mathcal{M} \times_3 W)\|_F^2 + \lambda_A \|A\|_F^2 + \lambda_R \|\mathcal{R}\|_F^2 + \lambda_W \|W\|_F^2 \quad (3)$$

via alternating least-squares, which is a block-coordinate optimization method in which blocks of variables are updated alternatingly until convergence. For equation (3) the variable blocks are given naturally by the factors $A$, $\mathcal{R}$, and $W$.

Updates for W. Let $\mathcal{E} = \mathcal{X} - \mathcal{R} \times_1 A \times_2 A$ and let $I$ be the identity matrix. We rewrite equation (2) as $E_{(3)} \approx W M_{(3)}$ such that equation (3) becomes a regularized least-squares problem when solving for $W$. It follows that updates for $W$ can be computed via $W \leftarrow (M_{(3)} M_{(3)}^{\top} + \lambda_W I)^{-1} M_{(3)} E_{(3)}^{\top}$.
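The algebraic rearrangement that makes this update efficient (contracting the oracle tensor down to an r × r × P tensor instead of materializing the dense N × N × K tensor) can be sanity-checked with einsum on small random factors; in this sketch, einsum contractions stand in for the mode-n unfoldings:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, P, r = 7, 3, 4, 2
A = rng.standard_normal((N, r))
R = rng.standard_normal((r, r, K))   # frontal slices R[:, :, k]
M = rng.standard_normal((N, N, P))

# naive: materialize the dense N x N x K tensor R x_1 A x_2 A first
T = np.einsum('ia,abk,jb->ijk', A, R, A)
naive = np.einsum('ijk,ijp->kp', T, M)          # (R x_1 A x_2 A)_(3) M_(3)^T

# efficient: shrink M to r x r x P first, never forming T
M_small = np.einsum('ia,ijp,jb->abp', A, M, A)  # M x_1 A^T x_2 A^T
efficient = np.einsum('abk,abp->kp', R, M_small)

assert np.allclose(naive, efficient)
```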
However, performing the updates in this way would be very inefficient, as it involves the computation of the dense N×N×K tensor R ×₁ A ×₂ A. This would quickly lead to scalability issues with regard to runtime and memory requirements. To overcome this issue, we rewrite M_(3) E_(3)^T using the equality (R ×₁ A ×₂ A)_(3) M_(3)^T = R_(3)(M ×₁ A^T ×₂ A^T)_(3)^T. Updates for W can then be computed efficiently as

W ← [X_(3) M_(3)^T − R_(3)(M ×₁ A^T ×₂ A^T)_(3)^T](M_(3) M_(3)^T + λ_W I)^{-1}.  (4)

In equation (4) the dense tensor R ×₁ A ×₂ A is never computed explicitly, and the computational complexity with regard to the parameters N, K, and r is reduced from O(N²Kr) to O(NKr³). Furthermore, all terms in equation (4) except R_(3)(M ×₁ A^T ×₂ A^T)_(3)^T are constant and only have to be computed once at the beginning of the algorithm. Finally, X_(3) M_(3)^T and M_(3) M_(3)^T are products of sparse matrices, such that their computational complexity depends only on the number of nonzeros in X or M. A full derivation of equation (4) can be found in the supplementary material A.4.

Updates for A and R: The updates for A and R can be derived directly from the RESCAL-ALS algorithm by setting E = X − M ×₃ W and computing the RESCAL factorization of E. The updates for A can therefore be computed by

A ← (Σ_{k=1}^K E_k A R_k^T + E_k^T A R_k)(Σ_{k=1}^K R_k A^T A R_k^T + R_k^T A^T A R_k + λ_A I)^{-1},

where E_k = X_k − M ×₃ w_k and w_k denotes the k-th row of W. The updates of R can be computed in the following way: let A = UΣV^T be the SVD of A, where σ_i is the i-th singular value of A. Furthermore, let S be the matrix with entries s_ij = σ_i σ_j/(σ_i² σ_j² + λ_R). An update of R_k can then be computed via R_k ← V(S ∗ (U^T(X_k − M ×₃ w_k)U))V^T, where "∗" denotes the Hadamard product. For a full derivation of these updates please see [20].

4 Evaluation We evaluated ARE on various multi-relational datasets, where we were in particular interested in its generalization ability relative to the factorization rank. 
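The unfolding identity behind equation (4) can be checked numerically on the same toy sizes. It is the Frobenius inner-product identity ⟨A R_k A^T, M_p⟩ = ⟨R_k, A^T M_p A⟩ applied slice by slice; the code and names below are our sketch, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, P, r = 5, 3, 2, 2

A = rng.standard_normal((N, r))
R = rng.standard_normal((K, r, r))
M = rng.standard_normal((N, N, P))

def unfold3(T):
    # Mode-3 unfolding: one row per frontal slice (row-major vec per slice).
    return np.moveaxis(T, 2, 0).reshape(T.shape[2], -1)

# Naive side: materialises the dense N x N x K tensor R x1 A x2 A.
dense = np.stack([A @ R[k] @ A.T for k in range(K)], axis=2)
lhs = unfold3(dense) @ unfold3(M).T                  # K x P

# Efficient side: only the small r x r x P tensor M x1 A^T x2 A^T is needed.
small = np.stack([A.T @ M[:, :, p] @ A for p in range(P)], axis=2)
rhs = R.reshape(K, -1) @ unfold3(small).T            # K x P
```

Both sides agree entry-wise, which is why the dense intermediate tensor never has to be built.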
For comparison, we included the well-known CP and Tucker tensor factorizations in the evaluation, as well as RESCAL and the non-latent model X ≈ M ×₃ W (in the following denoted by MW).

[Figure 1: Evaluation results for AUC-PR on the Kinships (1a) and Social Evolution data sets (1b–1f). Panels: (a) Kinships, (b) PoliticalDiscussant, (c) CloseFriend, (d) BlogLiveJournalTwitter, (e) SocializeTwicePerWeek, (f) FacebookAllTaggedPhotos. Each panel plots the area under the precision-recall curve against the rank for CP, Tucker, MW, RESCAL, and ARE.]

In all experiments, the oracle tensor M used in MW and ARE is identical, such that the results of MW can be regarded as a baseline for the contribution of the heuristic methods to ARE. Following [10, 11, 28, 21], we used k-fold cross-validation for the evaluation, partitioning the entries of the adjacency tensor into training, validation, and test sets. In the test and validation folds all entries are set to 0. Due to the large imbalance of true and false relationships, we used the area under the precision-recall curve (AUC-PR) to measure predictive performance, which is known to behave better with imbalanced classes than AUC-ROC. All AUC-PR results are averaged over the different test folds. Links and references for the datasets used in the evaluation are provided in the supplementary material A.5.

Social Evolution First, we evaluated ARE on a dataset consisting of multiple relations between persons living in an undergraduate dormitory. 
From the relational data, we constructed an 84×84×5 adjacency tensor where two modes correspond to persons and the third mode represents the relations between these persons, such as friendship (CloseFriend), social media interaction (BlogLiveJournalTwitter and FacebookAllTaggedPhotos), political discussion (PoliticalDiscussant), and social interaction (SocializeTwicePerWeek). For each relation, we performed link prediction via 5-fold cross-validation. The oracle tensor M consisted only of a copy of the observed tensor X. Including X in M allows ARE to efficiently exploit patterns where the existence of a social relationship for a particular pair of persons is predictive of other social interactions between exactly this pair of persons (e.g., close friends are more likely to socialize twice per week). It can be seen from the results in figure 1(b–f) that ARE achieves better performance than all competing approaches and already reaches excellent performance at a very low rank, which supports our theoretical considerations.

Kinships The Kinships dataset describes the kinship relations of the Australian Alyawarra tribe in terms of 26 kinship relations between 104 persons. The task in the experiment was to predict unknown kinship relations via 10-fold cross-validation in the same manner as in [21]. Table 1 shows the improvement of ARE over state-of-the-art relational learning methods. Figure 1a shows the predictive performance of multiple factorization methods as a function of the rank. It can be seen that ARE outperforms all other methods significantly for lower ranks. Moreover, starting from rank 40, ARE already gives results comparable to the best results in table 1. As in the previous experiments, M consisted only of a copy of X. On this dataset, the copy of X allows ARE to efficiently model that the relations in the data are mutually exclusive, by setting w_ii > 0 and w_ij < 0 for all i ≠ j. This also explains the large improvement of ARE over RESCAL for small ranks. 
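The AUC-PR metric used in these experiments is computed here as average precision, one standard non-interpolated estimate of the area under the precision-recall curve (the paper does not state which estimator it uses, so this is an assumption); for ranked link predictions it reduces to averaging the precision at the rank of each true link:

```python
def average_precision(scores, labels):
    # Non-interpolated average precision: mean of the precision values
    # attained at the rank of each true (label == 1) link.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, ap, n_pos = 0, 0.0, sum(labels)
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            tp += 1
            ap += tp / rank
    return ap / n_pos

# Two true links ranked 1st and 3rd among four candidates: (1/1 + 2/3) / 2.
ap = average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0])
```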
Link Prediction on Semantic Web Data The SWRC ontology models a research group in terms of people, publications, projects, and research interests. The task in our experiments was to predict the affiliation relation, i.e., to map persons to research groups. We followed the experimental setting in [18]: from the raw data, we created a 12058×12058×85 tensor by considering all entities directly connected to persons and research groups. In total, 168 persons and 5 research groups are considered in the evaluation data. The oracle tensor M consisted again of a copy of X and of the common-neighbor heuristics X_i X_i and X_i^T X_i^T. These heuristics were included to model patterns like "people who share the same research interest are likely in the same affiliation" or "a person is related to a department if the person belongs to a group in the department". We also imposed a sparsity penalty on W to prune away inactive heuristics during the iterations. Table 2 shows that ARE improved the results significantly over three state-of-the-art link prediction methods for Semantic Web data. Moreover, whereas RESCAL required a rank of 45, ARE required only a small rank of 15.

[Figure 2: Runtime on Cora. The plot shows nDCG against runtime in seconds (log scale) for RESCAL and ARE.]

Table 1: Evaluation results on Kinships.
        MRC [11]  BCTF [28]  LFM [8]      RESCAL  ARE
AUC     86        90         94.6         96      96.9
Rank    n/a       n/a        (50,50,500)  100     90

Table 2: Evaluation results on SWRC.
        SVD   Subtrees [18]  RESCAL  MW    ARE
nDCG    0.8   0.95           0.96    0.59  0.99

Runtime Performance To evaluate the trade-off between runtime and predictive performance, we recorded the nDCG values of RESCAL and ARE after each iteration of the respective ALS algorithms on the Cora citation database. We used the variant of Cora in which all publications are organized in a hierarchy of topics with two to three levels and 68 leaves. The relational data consists of information about paper citations, authors, and topics, from which a tensor of size 28073×28073×3 is constructed. 
The oracle tensor consisted of a copy of X and the common-neighbor patterns X_i X_j and X_i^T X_j^T, to model patterns such as "a cited paper shares the same topic" or "a cited paper shares the same author". The task of the experiment was to predict the leaf topic of papers by 5-fold cross-validation on a moderate PC with an Intel(R) Core i5 @ 3.1GHz and 4GB RAM. The optimal rank of 220 for RESCAL was determined from the range [10, 300] via parameter selection. For ARE we used a significantly smaller rank of 20. Figure 2 shows the runtime of RESCAL and ARE compared to their predictive performance. It is evident that ARE outperforms RESCAL after a few iterations, although the rank of the factorization is decreased by an order of magnitude. Moreover, ARE surpasses the best prediction results of RESCAL in terms of total runtime even before the first iteration of RESCAL-ALS has terminated.

5 Concluding Remarks In this paper we considered learning from latent and observable patterns on multi-relational data. We showed analytically that the rank of adjacency tensors is upper bounded by the sum of diclique partition numbers and lower bounded by the maximum number of strongly connected components of any relation in the data. Based on our theoretical results, we proposed an additive tensor factorization approach for learning from multi-relational data which combines strengths of latent and observable variable methods. Furthermore, we presented an efficient and scalable algorithm to compute the factorization. Experimentally, we showed that the proposed approach not only increases the predictive performance but is also very successful in reducing the required rank — and therefore also the required runtime — of the factorization. The proposed additive model is one option to overcome the rank-scalability problem outlined in section 2, but not the only one. In future work we intend to investigate to what extent sparse or hierarchical models can be used to the same effect. 
Acknowledgements Maximilian Nickel acknowledges support by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. We thank Youssef Mroueh and Lorenzo Rosasco for clarifying discussions on the theoretical part of this paper.

References
[1] E. M. Airoldi, D. M. Blei, S. E. Fienberg, and E. P. Xing. "Mixed Membership Stochastic Blockmodels". In: Journal of Machine Learning Research 9 (2008), pp. 1981–2014.
[2] A. Bordes, J. Weston, R. Collobert, and Y. Bengio. "Learning Structured Embeddings of Knowledge Bases". In: Proceedings of the 25th Conference on Artificial Intelligence. 2011.
[3] R. A. Brualdi and H. J. Ryser. Combinatorial Matrix Theory. 1991.
[4] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. Hruschka Jr., and T. Mitchell. "Toward an Architecture for Never-Ending Language Learning". In: AAAI. 2010, pp. 1306–1313.
[5] X. L. Dong, K. Murphy, E. Gabrilovich, G. Heitz, W. Horn, N. Lao, T. Strohmann, S. Sun, and W. Zhang. "Knowledge Vault: A Web-Scale Approach to Probabilistic Knowledge Fusion". In: Proceedings of the 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2014.
[6] L. Getoor, N. Friedman, D. Koller, A. Pfeffer, and B. Taskar. "Probabilistic Relational Models". In: Introduction to Statistical Relational Learning. 2007, pp. 129–174.
[7] P. D. Hoff. "Modeling homophily and stochastic equivalence in symmetric relational data". In: Advances in Neural Information Processing Systems. Vol. 20. 2008, pp. 657–664.
[8] R. Jenatton, N. Le Roux, A. Bordes, and G. Obozinski. "A latent factor model for highly multi-relational data". In: Advances in Neural Information Processing Systems. Vol. 25. 2012, pp. 3176–3184.
[9] X. Jiang, V. Tresp, Y. Huang, and M. Nickel. "Link Prediction in Multi-relational Graphs using Additive Models". In: Proceedings of the International Workshop on Semantic Technologies meet Recommender Systems & Big Data at the ISWC. Vol. 919. 2012, pp. 1–12.
[10] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T.
Yamada, and N. Ueda. "Learning systems of concepts with an infinite relational model". In: AAAI. Vol. 3. 2006, p. 5.
[11] S. Kok and P. Domingos. "Statistical Predicate Invention". In: Proceedings of the 24th International Conference on Machine Learning. 2007, pp. 433–440.
[12] T. G. Kolda and B. W. Bader. "Tensor Decompositions and Applications". In: SIAM Review 51.3 (2009), pp. 455–500.
[13] T. G. Kolda, B. W. Bader, and J. P. Kenny. "Higher-order web link analysis using multilinear algebra". In: Proceedings of the Fifth International Conference on Data Mining. 2005, pp. 242–249.
[14] Y. Koren. "Factorization meets the neighborhood: a multifaceted collaborative filtering model". In: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2008, pp. 426–434.
[15] N. Lao and W. W. Cohen. "Relational retrieval using a combination of path-constrained random walks". In: Machine Learning 81.1 (2010), pp. 53–67.
[16] D. Liben-Nowell and J. Kleinberg. "The link-prediction problem for social networks". In: Journal of the American Society for Information Science and Technology 58.7 (2007), pp. 1019–1031.
[17] N. Linial, S. Mendelson, G. Schechtman, and A. Shraibman. "Complexity measures of sign matrices". In: Combinatorica 27.4 (2007), pp. 439–463.
[18] U. Lösch, S. Bloehdorn, and A. Rettinger. "Graph Kernels for RDF Data". In: The Semantic Web: Research and Applications - 9th Extended Semantic Web Conference, ESWC 2012. Vol. 7295. 2012, pp. 134–148.
[19] S. D. Monson, N. J. Pullman, and R. Rees. "A survey of clique and biclique coverings and factorizations of (0,1)-matrices". In: Bulletin of the ICA 14 (1995), pp. 17–86.
[20] M. Nickel. "Tensor factorization for relational learning". PhD thesis. LMU München, 2013.
[21] M. Nickel, V. Tresp, and H.-P. Kriegel. "A Three-Way Model for Collective Learning on Multi-Relational Data". In: Proceedings of the 28th International Conference on Machine Learning. 2011, pp. 809–816.
[22] M.
Nickel, V. Tresp, and H.-P. Kriegel. "Factorizing YAGO: scalable machine learning for linked data". In: Proceedings of the 21st International Conference on World Wide Web. 2012, pp. 271–280.
[23] J. R. Quinlan. "Learning logical definitions from relations". In: Machine Learning 5 (1990), pp. 239–266.
[24] M. Richardson and P. Domingos. "Markov logic networks". In: Machine Learning 62.1 (2006), pp. 107–136.
[25] D. Serre. Matrices: Theory and Applications. Vol. 216. 2010.
[26] A. P. Singh and G. J. Gordon. "Relational learning via collective matrix factorization". In: Proc. of the 14th ACM SIGKDD International Conf. on Knowledge Discovery and Data Mining. 2008, pp. 650–658.
[27] F. M. Suchanek, G. Kasneci, and G. Weikum. "Yago: A Core of Semantic Knowledge". In: Proceedings of the 16th International Conference on World Wide Web. 2007, pp. 697–706.
[28] I. Sutskever, R. Salakhutdinov, and J. Tenenbaum. "Modelling Relational Data using Bayesian Clustered Tensor Factorization". In: Advances in Neural Information Processing Systems 22. 2009, pp. 1821–1828.
[29] Z. Xu, V. Tresp, K. Yu, and H.-P. Kriegel. "Infinite Hidden Relational Models". In: Proc. of the Twenty-Second Annual Conference on Uncertainty in Artificial Intelligence. 2006, pp. 544–551.
|
2014
|
129
|
5,213
|
Global Sensitivity Analysis for MAP Inference in Graphical Models Jasper De Bock Ghent University, SYSTeMS Ghent (Belgium) jasper.debock@ugent.be Cassio P. de Campos Queen’s University Belfast (UK) c.decampos@qub.ac.uk Alessandro Antonucci IDSIA Lugano (Switzerland) alessandro@idsia.ch Abstract We study the sensitivity of a MAP configuration of a discrete probabilistic graphical model with respect to perturbations of its parameters. These perturbations are global, in the sense that simultaneous perturbations of all the parameters (or any chosen subset of them) are allowed. Our main contribution is an exact algorithm that can check whether the MAP configuration is robust with respect to given perturbations. Its complexity is essentially the same as that of obtaining the MAP configuration itself, so it can be promptly used with minimal effort. We use our algorithm to identify the largest global perturbation that does not induce a change in the MAP configuration, and we successfully apply this robustness measure in two practical scenarios: the prediction of facial action units with posed images and the classification of multiple real public data sets. A strong correlation between the proposed robustness measure and accuracy is verified in both scenarios. 1 Introduction Probabilistic graphical models (PGMs) such as Markov random fields (MRFs) and Bayesian networks (BNs) are widely used as a knowledge representation tool for reasoning under uncertainty. When coping with such a PGM, it is not always practical to obtain numerical estimates of the parameters—the local probabilities of a BN or the factors of an MRF—with sufficient precision. This is true even for quantifications based on data, but it becomes especially important when eliciting the parameters from experts. An important question is therefore how precise these estimates should be to avoid a degradation in the diagnostic performance of the model. 
This remains important even when the accuracy of the estimates can be refined arbitrarily, since higher precision must be traded off against its cost. This paper is an attempt to systematically answer this question. More specifically, we address sensitivity analysis (SA) of discrete PGMs in the case of maximum a posteriori (MAP) inferences, by which we mean the computation of the most probable configuration of some variables given an observation of all others. (Some authors refer to this problem as MPE, most probable explanation, rather than MAP.) Let us clarify the way we intend SA here, while giving a short overview of previous work on SA in PGMs. First of all, a distinction should be made between quantitative and qualitative SA. Quantitative approaches are supposed to evaluate the effect of a perturbation of the parameters on the numerical value of a particular inference. Qualitative SA is concerned with deciding whether or not the perturbed values lead to a different decision, e.g., about the most probable configuration of the queried variable(s). Most of the previous work on SA is quantitative, being in particular focused on updating, i.e., the computation of the posterior probability of a single variable given some evidence, and mostly focused on BNs. After a first attempt based on a purely empirical investigation [17], a number of analytical methods based on the derivatives of the updated probability with respect to the perturbed parameters have been proposed [3, 4, 5, 11, 14]. Something similar has been done for MRFs as well [6]. To the best of our knowledge, qualitative SA has received almost no attention, with few exceptions [7, 18]. Secondly, we distinguish between local and global SA. The former considers the effect of the perturbation of a single parameter (and of possible additional perturbations that are induced by normalization constraints), while the latter aims at more general perturbations possibly affecting all the parameters of the PGM. 
Initial work on SA in PGMs considered the local approach [4, 14], while later work considered global SA as well [3, 5, 11]. Yet, for BNs, global SA has been tackled by methods whose time complexity is exponential in the number of perturbed conditional probability tables (CPTs), as they basically require the computation of all the mixed derivatives. For qualitative SA, as far as we know, only the local approach has been studied [7, 18]. This is unfortunate, as global SA might reveal stronger effects of perturbations due to synergetic effects, which might remain hidden in a local analysis. In this paper, we study global qualitative SA in discrete PGMs for MAP inferences, thereby intending to fill the existing gap in this topic. Let us introduce it by a simple example.

Example 1. Let X1 and X2 be two Boolean variables. For each i ∈ {1, 2}, Xi takes values in {xi, ¬xi}. The following probabilistic assessments are available: P(x1) = .45, P(x2|x1) = .2, and P(x2|¬x1) = .9. This induces a complete specification of the joint probability mass function P(X1, X2). If no evidence is present, the MAP joint state is (¬x1, x2), its probability being .495. The second most probable joint state is (x1, ¬x2), whose probability is .36. We perturb the above three parameters. Given ϵ_x1 ≥ 0, we consider any assessment of P(x1) such that |P(x1) − .45| ≤ ϵ_x1. We similarly perturb P(x2|x1) with ϵ_x2|x1 and P(x2|¬x1) with ϵ_x2|¬x1. The goal is to investigate whether or not (¬x1, x2) is also the unique MAP instantiation for each P(X1, X2) consistent with the above constraints, given a maximum perturbation level of ϵ = .06 for each parameter. Straightforward calculations show that this is true if only one parameter is perturbed at a time. The state (¬x1, x2) remains the most probable even if any two of the parameters are perturbed simultaneously. The situation is different if the perturbation level ϵ = .06 is applied to all three parameters simultaneously. 
There is a specification of the parameters consistent with the perturbations such that the MAP instantiation is (x1, ¬x2), which achieves probability .4386, corresponding to P(x1) = .51, P(x2|x1) = .14, and P(x2|¬x1) = .84. The minimum perturbation level for which this behaviour is observed is ϵ∗ = .05. For this value, there is a single specification of the model for which (x1, ¬x2) has the same probability as (¬x1, x2), which—for this value—is the single most probable instantiation for any other specification of the model that is consistent with the perturbations. The above example can be regarded as a qualitative SA for which the local approach is unable to identify a lack of robustness in the MAP solution, which is revealed instead by the global analysis. In the rest of the paper we develop an algorithm to efficiently detect the minimum perturbation level ϵ∗ leading to a different MAP solution. The time complexity of the algorithm is equal to that of MAP inference in the PGM times the number of variables in the domain, that is, exponential in the treewidth of the graph in the worst case. The approach can be specialized to local SA or to any other choice of parameters on which to perform SA, thus reproducing and extending existing results. The paper is organized as follows: the problem of checking the robustness of a MAP inference is introduced in its general formulation in Section 2. The discussion is then specialized to the case of PGMs in Section 3 and applied to global SA in Section 4. Experiments with real data sets are reported in Section 5, while conclusions and outlooks are given in Section 6.

2 MAP Inference and its Robustness We start by explaining how we intend SA for MAP inference and how this problem can be translated into an optimisation problem very similar to that used for the computation of MAP itself. 
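The numbers in Example 1 can be verified directly. In the sketch below (our encoding: 1 stands for x_i, 0 for ¬x_i), each competitor's probability ratio is monotone in every parameter, so checking the corners of the perturbation box suffices for this small model, and a bisection recovers the critical threshold ϵ∗ = .05:

```python
import itertools

BASE = (0.45, 0.2, 0.9)   # P(x1), P(x2|x1), P(x2|¬x1) from Example 1

def joint(px1, px2_x1, px2_nx1):
    # Joint mass function over (X1, X2) as a dict keyed by (x1, x2) in {0,1}^2.
    p1 = {1: px1, 0: 1.0 - px1}
    p2 = {1: {1: px2_x1, 0: 1.0 - px2_x1},
          0: {1: px2_nx1, 0: 1.0 - px2_nx1}}
    return {(a, b): p1[a] * p2[a][b] for a in (0, 1) for b in (0, 1)}

P = joint(*BASE)
map_state = max(P, key=P.get)          # (0, 1), i.e. (¬x1, x2), prob .495

def robust(eps):
    # Does (¬x1, x2) stay the unique MAP when all three parameters move by
    # at most eps simultaneously?  Extremes lie at the 8 box corners
    # (eps <= .1 keeps every parameter inside [0, 1]).
    for signs in itertools.product((-1, 1), repeat=3):
        Q = joint(*(p + s * eps for p, s in zip(BASE, signs)))
        if max(Q[state] for state in Q if state != (0, 1)) >= Q[(0, 1)]:
            return False
    return True

# Bisection for the critical threshold eps*: robust below it, not above it.
lo, hi = 0.0, 0.1
for _ in range(60):
    mid = (lo + hi) / 2.0
    lo, hi = (mid, hi) if robust(mid) else (lo, mid)
```

At ϵ = .05 the corner P(x1) = .5, P(x2|x1) = .15, P(x2|¬x1) = .85 makes both states equally probable (.425), which is exactly the tie described above.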
For the sake of readability, but without any loss of generality, we begin by considering a single variable only; the multivariate and the conditional cases are discussed in Section 3. Consider a single variable X taking its values in a finite set Val(X). Given a probability mass function P over X, x̃ ∈ Val(X) is said to be a MAP instantiation for P if

x̃ ∈ arg max_{x∈Val(X)} P(x),  (1)

which means that x̃ is the most likely value of X according to P. In principle a mass function P can have multiple (equally probable) MAP instantiations. However, in practice there will often be only one, and we then call it the unique MAP instantiation for P. As we did in Example 1, SA can be achieved by modeling perturbations of the parameters in terms of (linear) constraints over them, which are used to define the set of all perturbed models whose mass function is consistent with these constraints. Generally speaking, we consider an arbitrary set P of candidate mass functions, one of which is the original unperturbed mass function P. The only imposed restriction is that P must be compact. This way of defining candidate models establishes a link between SA and the theory of imprecise probability, which extends the Bayesian theory of probability to cope with compact (and often convex) sets of mass functions [19]. For the MAP inference in Eq. (1), performing SA with respect to a set of candidate models P requires the identification of the instantiations that are MAP for at least one perturbed mass function, that is,

Val∗(X) := {x̃ ∈ Val(X) : ∃P′ ∈ P such that x̃ ∈ arg max_{x∈Val(X)} P′(x)}.  (2)

These instantiations are called E-admissible [15]. If the above set contains only a single MAP instantiation x̃ (which is then necessarily the unique solution of Eq. (1) as well), then we say that the model P is robust with respect to the perturbation P.

Example 2. Let X take values in Val(X) := {a, b, c, d}. Consider a perturbation P := {P1, P2} that contains only two candidate mass functions over X. 
Let P1 be defined by P1(a) = .5, P1(b) = P1(c) = .2, and P1(d) = .1, and let P2 be defined by P2(b) = .35, P2(a) = P2(c) = .3, and P2(d) = .05. Then a and b are the unique MAP instantiations of P1 and P2, respectively. This implies that Val∗(X) = {a, b} and that neither P1 nor P2 is robust with respect to P. For large domains Val(X), for instance in the multivariate case, evaluating Val∗(X) is a time-consuming task that is often intractable. However, if we are not interested in evaluating Val∗(X), but only want to decide whether or not P is robust with respect to the perturbation described by P, more efficient methods can be used. The following theorem establishes how this decision can be reformulated as an optimisation problem that, as we are about to show in Section 3, can be solved efficiently for PGMs. Due to space constraints, the proofs are provided as supplementary material.

Theorem 1. Let X be a variable taking values in a finite set Val(X) and let P be a set of candidate mass functions over X. Let x̃ be a MAP instantiation for a mass function P ∈ P. Then x̃ is the unique MAP instantiation for every P′ ∈ P, that is, Val∗(X) has cardinality one, if and only if

min_{P′∈P} P′(x̃) > 0  and  max_{x∈Val(X)\{x̃}} max_{P′∈P} P′(x)/P′(x̃) < 1,  (3)

where the first inequality should be checked first, because if it fails, then the left-hand side of the second inequality is ill-defined.

3 PGMs and Efficient Robustness Verification Let X = (X1, . . . , Xn) be a vector of variables taking values in their respective finite domains Val(X1), . . . , Val(Xn). We will use [n] as a shorthand notation for {1, . . . , n}, and similarly for other natural numbers. For every non-empty C ⊆ [n], XC is the vector that consists of the variables Xi, i ∈ C, and takes values in Val(XC) := ×_{i∈C} Val(Xi). For C = [n] and C = {i}, we obtain X = X[n] and Xi = X{i} as important special cases. A factor φ over a vector XC is a real-valued map on Val(XC). 
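For a finite set of candidate mass functions, condition (3) of Theorem 1 can be checked by direct enumeration; the function name and encoding below are ours, and the brute force stands in for the efficient PGM machinery developed next:

```python
def unique_map_for_all(candidates, x_tilde, domain):
    # Condition (3): check min P'(x_tilde) > 0 first, then the worst-case
    # probability ratio over all competitors and all candidate models.
    if min(P(x_tilde) for P in candidates) <= 0:
        return False
    worst_ratio = max(P(x) / P(x_tilde)
                      for P in candidates for x in domain if x != x_tilde)
    return worst_ratio < 1

# Example 2: the perturbation P = {P1, P2} over Val(X) = {a, b, c, d}.
P1 = {'a': .5, 'b': .2, 'c': .2, 'd': .1}.get
P2 = {'a': .3, 'b': .35, 'c': .3, 'd': .05}.get

alone = unique_map_for_all([P1], 'a', 'abcd')       # True: a dominates under P1
both = unique_map_for_all([P1, P2], 'a', 'abcd')    # False: P2(b)/P2(a) > 1
```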
If φ(xC) ≥ 0 for all xC ∈ Val(XC), then φ is said to be nonnegative. Let I1, . . . , Im be a collection of index sets such that I1 ∪ · · · ∪ Im = [n] and let Φ = {φ1, . . . , φm} be a set of nonnegative factors over the vectors XI1, . . . , XIm, respectively. We say that Φ is a PGM if it induces a joint probability mass function PΦ over Val(X), defined by

PΦ(x) := (1/ZΦ) ∏_{k=1}^m φk(x_Ik)  for all x ∈ Val(X),  (4)

where ZΦ := Σ_{x∈Val(X)} ∏_{k=1}^m φk(x_Ik) is the normalising constant, called the partition function. Since Val(X) is finite, Φ is a PGM if and only if ZΦ > 0.

3.1 MAP and Second Best MAP Inference for PGMs If Φ is a PGM then, by merging Eqs. (1) and (4), we see that x̃ ∈ Val(X) is a MAP instantiation for PΦ if and only if

∏_{k=1}^m φk(x_Ik) ≤ ∏_{k=1}^m φk(x̃_Ik)  for all x ∈ Val(X),

where x̃_Ik is the unique element of Val(XIk) that is consistent with x̃, and likewise for x_Ik and x. Similarly, x^(2) ∈ Val(X) is said to be a second best MAP instantiation for PΦ if and only if there is a MAP instantiation x^(1) for PΦ such that x^(1) ≠ x^(2) and

∏_{k=1}^m φk(x_Ik) ≤ ∏_{k=1}^m φk(x^(2)_Ik)  for all x ∈ Val(X) \ {x^(1)}.  (5)

MAP inference in PGMs is an NP-hard task (see [12] for details). The task can be solved exactly by junction tree algorithms in time exponential in the treewidth of the network's moral graph. While finding the k-th best instantiation might be an even harder task [13] for general k, the second best MAP instantiation can be found by a sequence of MAP queries: (i) compute a first best MAP instantiation x̃^(1); (ii) for each queried variable Xi, take the original PGM, add an extra factor for Xi that equals 1 minus the indicator of the value that Xi has in x̃^(1), and run the MAP inference; (iii) report the instantiation with the highest probability among all these runs. 
Because the second best has to differ from the first best in at least one Xi (and this is ensured by that extra factor), this procedure is correct, and in the worst case it spends time equal to a single MAP inference multiplied by the number of variables. Faster approaches to directly compute the second best MAP, without reduction to standard MAP queries, have also been proposed (see [8] for an overview).

3.2 Evaluating the Robustness of MAP Inference With Respect to a Family of PGMs For every k ∈ [m], let ψk be a set of nonnegative factors over the vector XIk. Every combination of factors Φ = {φ1, . . . , φm} from the sets ψ1, . . . , ψm, respectively, is called a selection. Let Ψ := ×_{k=1}^m ψk be the set consisting of all these selections. If every selection Φ ∈ Ψ is a PGM, then Ψ is said to be a family of PGMs. We then denote the corresponding set of distributions by PΨ := {PΦ : Φ ∈ Ψ}. In the following theorem, we establish that evaluating the robustness of MAP inference with respect to this set PΨ can be reduced to a second best MAP instantiation problem.

Theorem 2. Let X = (X1, . . . , Xn) be a vector of variables taking values in their respective finite domains Val(X1), . . . , Val(Xn), let I1, . . . , Im be a collection of index sets such that I1 ∪ · · · ∪ Im = [n] and, for every k ∈ [m], let ψk be a compact set of nonnegative factors over XIk such that Ψ = ×_{k=1}^m ψk is a family of PGMs. Consider now a PGM Φ ∈ Ψ and a MAP instantiation x̃ for PΦ, and define, for every k ∈ [m] and every x_Ik ∈ Val(XIk):

αk := min_{φ′k∈ψk} φ′k(x̃_Ik)  and  βk(x_Ik) := max_{φ′k∈ψk} φ′k(x_Ik)/φ′k(x̃_Ik).  (6)

Then x̃ is the unique MAP instantiation for every P′ ∈ PΨ if and only if

(∀k ∈ [m]) αk > 0  and  ∏_{k=1}^m βk(x^(2)_Ik) < 1,  (RMAP)

where x^(2) is an arbitrary second best MAP instantiation for the distribution P_Φ̃ that corresponds to the PGM Φ̃ := {β1, . . . , βm}. The first criterion in (RMAP) should be checked first because βk(x^(2)_Ik) is ill-defined if αk = 0. 
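For finite factor sets, the (RMAP) test can be sketched directly (names and toy factors are ours). Since βk(x̃_Ik) = 1 for every k, the product of the βk at x̃ is 1, so scanning all competitors for a product below 1 is equivalent to inspecting the second best MAP of the β-PGM; a brute-force scan stands in for that query here:

```python
import itertools, math

def rmap_check(psi, x_tilde, domains):
    # (RMAP) for finite factor sets psi[k] = (scope, [candidate factors]).
    for scope, phis in psi:
        xt = tuple(x_tilde[i] for i in scope)
        if min(phi(xt) for phi in phis) <= 0:      # alpha_k > 0 fails
            return False
    def beta(scope, phis, x):
        xk = tuple(x[i] for i in scope)
        xt = tuple(x_tilde[i] for i in scope)
        return max(phi(xk) / phi(xt) for phi in phis)
    worst = max(math.prod(beta(scope, phis, x) for scope, phis in psi)
                for x in itertools.product(*domains) if x != x_tilde)
    return worst < 1

# Two binary variables and a single perturbed pairwise factor set.
phi_a = lambda s: {(0, 0): 4.0}.get(s, 1.0)
phi_b = lambda s: {(0, 0): 3.0, (1, 1): 2.0}.get(s, 1.0)
phi_c = lambda s: {(0, 0): 3.0, (1, 1): 5.0}.get(s, 1.0)

domains = [(0, 1)] * 2
small_set = rmap_check([((0, 1), [phi_a, phi_b])], (0, 0), domains)         # True
large_set = rmap_check([((0, 1), [phi_a, phi_b, phi_c])], (0, 0), domains)  # False
```

Adding `phi_c` raises β at (1, 1) to 5/3, so the second condition of (RMAP) fails and (0, 0) is no longer robust.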
Theorem 2 provides an algorithm to test the robustness of MAP in PGMs. From a computational point of view, checking (RMAP) can be done as described in the previous subsection, apart from the local computations appearing in Eq. (6). These local computations depend on the particular choice of perturbation. As we will see further on, many natural perturbations induce very efficient local computations (usually because they are related somehow to simple linear or convex programming problems). In most practical situations, some variables XO, with O ⊂ [n], are observed and therefore known to be in a given configuration y ∈ Val(XO). In this case, the MAP inference for the conditional mass function PΦ(XQ|y) should be considered, where XQ := X[n]\O are the queried variables. While we have avoided the discussion of the conditional case and considered only the MAP inference (and its robustness check) for the whole set of variables of the PGM, the standard technique employed with MRFs of including additional identity functions to encode observations suffices, as the probability of the observation (and therefore also the value of the partition function) does not influence the result of MAP inferences. Hence, one can run the MAP inference for the PGM Φ′ augmented with local identity functions that yield y, such that ZΦ′PΦ′(XQ) = ZΦPΦ(XQ, y) (that is, the unnormalized probabilities are equal, so the MAP instantiations are equal too), and hence the very same techniques explained for the unconditional case are applicable to conditional MAP inference (and its robustness check) as well.

4 Global SA in PGMs The most natural way to perform global SA in a PGM Φ = {φ1, . . . , φm} is by perturbing all of its factors. Following the ideas introduced in Sections 2 and 3, we model the effect of the perturbation by replacing the factor φk with a compact set ψk of factors, for each k ∈ [m]. This induces a family Ψ of PGMs. 
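The evidence-encoding trick can be sketched in a few lines (toy model and names are ours): an indicator factor for the observed value turns a conditional MAP query into an unconditional one, and normalisation never matters because the partition function cancels in the argmax:

```python
import itertools, math

def map_state(factors, domains):
    # Brute-force argmax of the unnormalised product of factors.
    return max(itertools.product(*domains),
               key=lambda x: math.prod(phi(tuple(x[i] for i in scope))
                                       for scope, phi in factors))

# Joint model over (X0, X1); we observe X1 = 1 and query X0.
domains = [(0, 1), (0, 1)]
pair = ((0, 1), lambda s: {(0, 0): .4, (0, 1): .1, (1, 0): .2, (1, 1): .3}[s])

# Conditional MAP by hand: argmax_{x0} P(x0, X1 = 1) gives x0 = 1 (.3 vs .1).
# The same answer falls out of an unconditional MAP query on the model
# augmented with an identity (indicator) factor for the evidence:
evidence = ((1,), lambda s: 1.0 if s[0] == 1 else 0.0)
x_map = map_state([pair, evidence], domains)                 # (1, 1)
```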
The condition (RMAP) can therefore be used to decide whether or not the MAP instantiation for P_Φ is the unique MAP instantiation for every P' ∈ P_Ψ. In other words, we have an algorithm to test the robustness of P_Φ with respect to the perturbation P_Ψ. To characterize the perturbation level, we introduce the notion of a parametrized perturbation ψ^ϵ_k of a factor φ_k, defined by requiring that: (i) for each ϵ ∈ [0, 1], ψ^ϵ_k is a compact set of factors, each of which has the same domain as φ_k; (ii) if ϵ_2 ≥ ϵ_1, then ψ^{ϵ_2}_k ⊇ ψ^{ϵ_1}_k; and (iii) ψ^0_k = {φ_k}. Given a parametrized perturbation for each factor of the PGM Φ, we denote by Ψ^ϵ the corresponding family of PGMs and by P_{Ψ^ϵ} the corresponding set of joint mass functions. We define the critical perturbation threshold ϵ∗ as the supremum value of ϵ ∈ [0, 1] such that P_Φ is robust with respect to the perturbation P_{Ψ^ϵ}, i.e., such that the condition (RMAP) is still satisfied. Because of property (ii) of parametrized perturbations, we know that if (RMAP) is not satisfied for a particular value of ϵ then it cannot be satisfied for any larger value and, vice versa, if the criterion is satisfied for a particular value then it is also satisfied for every smaller value. An algorithm to evaluate ϵ∗ can therefore be obtained by iteratively checking (RMAP) according to a bracketing scheme (e.g., bisection) over ϵ. Local SA, as well as SA of only a selected collection of parameters, comes as a byproduct, as one can perturb only some of the factors and our results and algorithm still apply.

4.1 Global SA in Markov Random Fields (MRFs)

MRFs are PGMs based on undirected graphs. The factors are associated with cliques of the graph. The specialization of the technique outlined by Theorem 2 is straightforward. A possible perturbation technique is the rectangular one.
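The bracketing scheme can be sketched as a plain bisection, assuming one is handed an oracle `is_robust(eps)` that evaluates condition (RMAP) for the family Ψ^ϵ; the oracle used in the last line is a synthetic stand-in for illustration, not an actual robustness check.

```python
def critical_threshold(is_robust, tol=1e-4):
    """Approximate eps* = sup{eps in [0, 1] : (RMAP) holds for Psi^eps}.
    Property (ii) of parametrized perturbations makes robustness monotone
    in eps, which is what justifies the bisection."""
    if not is_robust(0.0):
        return 0.0                 # not robust even without perturbation
    if is_robust(1.0):
        return 1.0                 # robust under the largest perturbation
    lo, hi = 0.0, 1.0              # invariant: robust at lo, not at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_robust(mid):
            lo = mid
        else:
            hi = mid
    return lo

eps_star = critical_threshold(lambda eps: eps <= 0.25)   # toy oracle
```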
Given a factor φ_k, its rectangular parametric perturbation ψ^ϵ_k is:

ψ^ϵ_k = {φ'_k ≥ 0 : |φ'_k(x_{I_k}) − φ_k(x_{I_k})| ≤ ϵ∆ for all x_{I_k} ∈ Val(X_{I_k})},   (7)

where ∆ > 0 is a chosen maximum perturbation level, achieved for ϵ = 1. For this kind of perturbation, the optimization in Eq. (6) is trivial: α_k = max{0, φ_k(˜x_{I_k}) − ϵ∆} and, if α_k > 0, then β_k(˜x_{I_k}) = 1 and, for all x_{I_k} ∈ Val(X_{I_k}) \ {˜x_{I_k}},

β_k(x_{I_k}) = (φ_k(x_{I_k}) + ϵ∆) / (φ_k(˜x_{I_k}) − ϵ∆).

If α_k = 0, even for a single k, the criterion (RMAP) is not satisfied and β_k need not be computed.

4.2 Global SA in Bayesian Networks (BNs)

BNs are PGMs based on directed graphs. The factors are CPTs, one for each variable, each conditioned on the parents of that variable. Each CPT contains a conditional mass function for each joint state of the parents. Perturbations in BNs can take this into account and use perturbations with a direct probabilistic interpretation. Consider an unconditional mass function P over X. A parametrized perturbation P_ϵ of P can be achieved by ϵ-contamination [2]:

P_ϵ := {(1 − ϵ)P(X) + ϵP∗(X) : P∗(X) any mass function on X}.   (8)

It is a trivial exercise to check that this is a proper parametric perturbation of P(X) and that P_1 is the whole probability simplex. We perturb the CPTs of a BN by applying this parametric perturbation to every conditional mass function. Let P(X | Y) =: ψ(X, Y) be a CPT. The optimization in Eq. (6) is also trivial in this case. We have α_k = (1 − ϵ)P(˜x | ˜y) and, if α_k > 0, then β_k(˜x_{I_k}) = 1 and, for all x_{I_k} ∈ Val(X_{I_k}) \ {˜x_{I_k}},

β_k(x_{I_k}) = ((1 − ϵ)P(x | y) + ϵ) / ((1 − ϵ)P(˜x | ˜y)),

where ˜x and ˜y are consistent with ˜x_{I_k}, and similarly for x, y and x_{I_k}. More general perturbations can also be considered, and the efficiency of their computation relates to the optimization in Eq. (6).
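The closed-form local computations for the rectangular case can be sketched as follows; the factor table `phi` and the values of ϵ and ∆ are toy choices of ours.

```python
def rectangular_alpha_beta(phi, x_map, eps, delta):
    """alpha_k and beta_k of Eq. (6) under the rectangular perturbation:
    alpha_k = max{0, phi(x~) - eps*Delta}; if alpha_k > 0, beta_k(x~) = 1
    and beta_k(x) = (phi(x) + eps*Delta) / (phi(x~) - eps*Delta) otherwise.
    phi maps configurations (tuples) to nonnegative factor values."""
    lo = phi[x_map] - eps * delta
    alpha = max(0.0, lo)
    if alpha == 0.0:
        return alpha, None   # (RMAP) already fails; beta is ill-defined
    beta = {x: 1.0 if x == x_map else (v + eps * delta) / lo
            for x, v in phi.items()}
    return alpha, beta

phi = {(0,): 0.5, (1,): 2.0}     # toy one-variable factor, MAP value at x = 1
alpha, beta = rectangular_alpha_beta(phi, (1,), eps=0.1, delta=1.0)
```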
Consequently, any linear or convex perturbation can be handled efficiently, in polynomial time, by convex programming methods, while more sophisticated perturbations may require general non-linear optimization, in which case we can no longer guarantee that computations are exact and fast.

5 Experiments

5.1 Facial Action Unit Recognition

We consider the problem of recognizing facial action units from real image data using the CK+ data set [10, 16]. Based on the Facial Action Coding System [9], facial behaviors can be decomposed into a set of 45 action units (AUs), which are related to contractions of specific sets of facial muscles. We work with 23 recurrent AUs (for a complete description, see [9]). Some AUs occur together as part of a meaningful facial expression: AU6 (cheek raiser) tends to occur together with AU12 (lip corner puller) when someone is smiling. On the other hand, some AUs may be mutually exclusive: AU25 (lips part) never happens simultaneously with AU24 (lip presser), since they are activated by the same muscles but with opposite motions. The data set contains 68 landmark positions (given by coordinates x and y) of the faces of 589 posed individuals (after filtering out cases with missing data), as well as the labels for the AUs. Our goal is to predict all the AUs happening in a given image. In this work, we do not aim to outperform other methods designed for this particular task, but to analyse the robustness of a model when applied in this context. In spite of that, we expected to obtain a reasonably good accuracy by using an MRF. One third of the posed faces are selected for testing, and two thirds for training the model. The labels of the testing data are not available during training and are used only to compute the accuracy of the predictions.
Using the training data and following the ideas in [16], we build a linear support vector machine (SVM) separately for each one of the 23 AUs, using the image landmarks to predict that given AU. With these SVMs, we create new variables o_1, . . . , o_23, one for each selected AU, containing the predicted value from the SVM. This is performed for all the data, including training and testing data. After that, the landmarks are discarded and the data is considered to have 46 variables (true values and SVM-predicted ones). At this point, the accuracy of the SVM measurements on the testing data, if one considers the average agreement between the vector of 23 true values and the vector of 23 predicted ones (that is, the number of times AU_i equals o_i, summed over all i and all instances in the testing data, divided by 23 times the number of instances), is about 87%. We now use these 46 variables to build an MRF (we use a very simplistic penalized likelihood approach for learning the MRF, as the goal is not to obtain state-of-the-art classification but to analyse robustness), as shown in Fig. 1(a), where SVM-built variables are treated as observational/measurement nodes and relations are learned between the AUs (non-displayed AU variables in the figure are only connected to their corresponding measurements). Using the MRF, we predict the AU configuration using a MAP algorithm, where all AUs are queried and all measurement nodes are observed. As before, we characterise the accuracy of this model by the average agreement between predicted vectors and true vectors, obtaining about 89% accuracy. That is, the inclusion of the relations between AUs by means of the MRF slightly improves the accuracy obtained independently for each AU from the SVM. For our present purposes, however, we are more interested in the associated perturbation thresholds ϵ∗.
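The accuracy measure used here (agreement averaged over all AUs and all test instances) is simple to write down; the sketch below uses made-up 3-dimensional binary vectors in place of the 23-dimensional AU vectors.

```python
def average_agreement(true_vecs, pred_vecs):
    """Fraction of positions where the predicted binary vector matches the
    true one, summed over all instances and divided by (vector length
    times the number of instances), as described in the text."""
    m, k = len(true_vecs), len(true_vecs[0])
    hits = sum(t == p
               for tv, pv in zip(true_vecs, pred_vecs)
               for t, p in zip(tv, pv))
    return hits / (k * m)

acc = average_agreement([[1, 0, 1], [0, 0, 1]],
                        [[1, 1, 1], [0, 0, 0]])   # 4 of 6 positions agree
```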
For each instance of the testing data (that is, for each vector of 23 measurements), we compute ϵ∗ using the rectangular perturbations of Section 4.1. The higher ϵ∗ is, the more robust the issued vector is, because it remains the single optimal MAP instantiation even if one varies all the parameters of the MRF by ϵ∗. To understand the relation between ϵ∗ and the accuracy of predictions, we have split the testing instances into bins according to the Hamming distance between true and predicted vectors.

[Figure 1: (a) the graph of the MRF used to compute MAP; (b) boxplots of the robustness measure ϵ∗ of MAP solutions, for different values of the Hamming distance to the truth.]

Figure 1(b) shows the boxplot of ϵ∗ for each value of the Hamming distance between 0 and 4 (a lower ϵ∗ of a MAP instantiation means lower robustness). As we can see in the figure, the median robustness ϵ∗ decreases monotonically with the distance, indicating that this measure is correlated with the accuracy of the issued predictions, and hence can be used as second-order information about the obtained MAP instantiation for each instance. The data set also contains information about the emotion expressed in the posed faces (at least for part of the images), which is shown in Figure 2(b): anger, disgust, fear, happy, sadness and surprise. We have partitioned the testing data according to these six emotions and plotted their robustness measure ϵ∗ (Figure 2(a)). It is interesting to see the relation between robustness and emotions. Arguably, it is much easier to identify surprise (because of the stretched face and open mouth) than anger (because of the more restricted muscle movements defining it). Figure 2 corroborates this statement, and suggests that the robustness measure ϵ∗ can have further applications.
[Figure 2: (a) box plots of the robustness measure ϵ∗ of the MAP solutions, split according to the emotion presented in the instance where MAP was computed; (b) examples of the emotions encoded in the data set [10, 16], one emotion per row.]

[Figure 3: average accuracy of a classifier over 10 runs of 5-fold cross-validation, plotted against ϵ∗ (one curve per UCI data set: audiology, autos, breast-cancer, horse-colic, german-credit, pima-diabetes, hypothyroid, ionosphere, lymphography, mfeat, optdigits, segment, solar-flare, sonar, soybean, sponge, zoo, vowel). Each instance is classified by a MAP inference; instances are categorized by their ϵ∗, which indicates their robustness (the amount of perturbation up to which the MAP instantiation remains unique).]

5.2 Robustness of Classification

In this second experiment, we turn our attention to the classification problem, using data sets from the UCI machine learning repository [1]. Data sets with many different characteristics have been used. Continuous variables have been discretized at their median before any other use of the data. Our empirical results are obtained out of 10 runs of 5-fold cross-validation (each run splits the data into folds randomly and in a stratified way), so the learning procedure of each classifier is called 50 times per data set. In all tests we have employed a Naive Bayes classifier with equivalent sample size equal to one. After the classifier is learned using 4 out of 5 folds, predictions for the other fold are issued based on the MAP solution, and the robustness measure ϵ∗ is computed. Here, the value ϵ∗ corresponds to the size of the contamination of the model for which the classification result of a given test instance remains unique and unchanged (as described in Section 4.2).
Figure 3 shows the classification accuracy for varying values of ϵ∗ (to obtain the curves, we split the test instances into bins according to the computed value ϵ∗, using intervals of length 10^−2; that is, accuracy was calculated for every instance with ϵ∗ between 0 and 0.01, then between 0.01 and 0.02, and so on). We can see a clear relation between accuracy and predicted robustness ϵ∗. We emphasize that the computation of ϵ∗ does not depend on the true MAP instantiation, which is only used to verify the accuracy. Again, the robustness measure provides valuable information about the quality of the obtained MAP results.

6 Conclusions

We consider the sensitivity of the MAP instantiations of discrete PGMs with respect to perturbations of the parameters. Simultaneous perturbations of all the parameters (or any chosen subset of them) are allowed. An exact algorithm to check the robustness of the MAP instantiation with respect to the perturbations is derived. The worst-case time complexity is that of the original MAP inference times the number of variables in the domain. The algorithm is used to compute a robustness measure, related to changes in the MAP instantiation, which is applied to the prediction of facial action units and to classification problems. A strong association between that measure and accuracy is verified. As future work, we want to develop efficient algorithms that determine, when a result is not robust, what characterizes such instances and how this robustness information can be used to improve classification accuracy.

Acknowledgements

J. De Bock is a PhD Fellow of the Research Foundation Flanders (FWO) and he wishes to acknowledge its financial support. The work of C. P. de Campos has been mostly performed while he was with IDSIA and has been partially supported by the Swiss NSF grant 200021 146606 / 1.

References

[1] A. Asuncion and D.J. Newman. UCI machine learning repository.
http://www.ics.uci.edu/∼mlearn/MLRepository.html, 2007. [2] J. Berger. Statistical decision theory and Bayesian analysis. Springer Series in Statistics. Springer, New York, NY, 1985. [3] E.F. Castillo, J.M. Gutierrez, and A.S. Hadi. Sensitivity analysis in discrete Bayesian networks. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 27(4):412–423, 1997. [4] H. Chan and A. Darwiche. When do numbers really matter? Journal of Artificial Intelligence Research, 17:265–287, 2002. [5] H. Chan and A. Darwiche. Sensitivity analysis in Bayesian networks: from single to multiple parameters. In Proceedings of UAI 2004, pages 67–75, 2004. [6] H. Chan and A. Darwiche. Sensitivity analysis in Markov networks. In Proceedings of IJCAI 2005, pages 1300–1305, 2005. [7] H. Chan and A. Darwiche. On the robustness of most probable explanations. In Proceedings of UAI 2006, pages 63–71, 2006. [8] R. Dechter, N. Flerova, and R. Marinescu. Search algorithms for m best solutions for graphical models. In Proceedings of AAAI 2012, 2012. [9] P. Ekman and W. V. Friesen. Facial action coding system: A technique for the measurement of facial movement. Consulting Psychologists Press, Palo Alto, CA, 1978. [10] T. Kanade, J. F. Cohn, and Y. Tian. Comprehensive database for facial expression analysis. In Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, pages 46–53, Grenoble, 2000. [11] U. Kjaerulff and L.C. van der Gaag. Making sensitivity analysis computationally efficient. In Proceedings of UAI 2000, pages 317–325, 2000. [12] J. Kwisthout. Most probable explanations in Bayesian networks: complexity and tractability. International Journal of Approximate Reasoning, 52(9):1452–1469, 2011. [13] J. Kwisthout, H. L. Bodlaender, and L. C. van der Gaag. The complexity of finding k-th most probable explanations in probabilistic networks. In Proceedings of SOFSEM 2011, pages 356– 367, 2011. [14] K. B. Laskey. 
Sensitivity analysis for probability assessments in Bayesian networks. IEEE Transactions on Systems, Man, and Cybernetics, 25(6):901–909, 1995. [15] I. Levi. The Enterprise of Knowledge. MIT Press, London, 1980. [16] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews. The Extended Cohn-Kanade Dataset (CK+): A complete expression dataset for action unit and emotion-specified expression. In Proceedings of the Third International Workshop on CVPR for Human Communicative Behavior Analysis, pages 94–101, San Francisco, 2010. [17] M. Pradhan, M. Henrion, G.M. Provan, B.D. Favero, and K. Huang. The sensitivity of belief networks to imprecise probabilities: an experimental investigation. Artificial Intelligence, 85(1-2):363–397, 1996. [18] S. Renooij and L.C. van der Gaag. Evidence and scenario sensitivities in naive Bayesian classifiers. International Journal of Approximate Reasoning, 49(2):398–416, 2008. [19] P. Walley. Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London, 1991.
Deterministic Symmetric Positive Semidefinite Matrix Completion William E. Bishop1,2, Byron M. Yu2,3,4 1Machine Learning, 2Center for the Neural Basis of Cognition, 3Biomedical Engineering, 4Electrical and Computer Engineering Carnegie Mellon University {wbishop, byronyu}@cmu.edu Abstract We consider the problem of recovering a symmetric, positive semidefinite (SPSD) matrix from a subset of its entries, possibly corrupted by noise. In contrast to previous matrix recovery work, we drop the assumption of a random sampling of entries in favor of a deterministic sampling of principal submatrices of the matrix. We develop a set of sufficient conditions for the recovery of a SPSD matrix from a set of its principal submatrices, present necessity results based on this set of conditions and develop an algorithm that can exactly recover a matrix when these conditions are met. The proposed algorithm is naturally generalized to the problem of noisy matrix recovery, and we provide a worst-case bound on reconstruction error for this scenario. Finally, we demonstrate the algorithm’s utility on noiseless and noisy simulated datasets. 1 Introduction There are multiple scenarios where we might wish to reconstruct a symmetric positive semidefinite (SPSD) matrix from a sampling of its entries. In multidimensional scaling, for example, pairwise distance measurements are used to form a kernel matrix and PCA is performed on this matrix to embed the data in a low-dimensional subspace. However, due to constraints, it may not be possible to measure pairwise distances for all variables, rendering the kernel matrix incomplete. In neuroscience a population of neurons is often modeled as driven by a low-dimensional latent state [1], producing a low-rank covariance structure in the observed neural recordings. However, with current technology, it may only be possible to record from a large population of neurons in small, overlapping sets [2,3], leaving holes in the empirical covariance matrix. 
More generally, SPSD matrices in the form of Gram matrices play a key role in a broad range of machine learning problems, such as support vector machines [4], Gaussian processes [5] and nonlinear dimensionality reduction techniques [6], and the reconstruction of such matrices from a subset of their entries is of general interest. In real-world scenarios, the constraints that make it difficult to observe a whole matrix often also constrain which particular entries of a matrix are observable. In such settings, existing matrix completion results, which assume matrix entries are revealed in an unstructured, random manner [7–14] or assume the ability to finely query individual entries of a matrix in an adaptive manner [15, 16], might not be applicable. This motivates us to examine the problem of recovering a SPSD matrix from a given, deterministic set of its entries. In particular, we focus on reconstructing a SPSD matrix from a revealed set of its principal submatrices. Recall that a principal submatrix of a matrix is a submatrix obtained by symmetrically removing rows and columns of the original matrix. When individual entries of a matrix are formed by pairwise measurements between experimental variables, principal submatrices are a natural way to formally capture how entries are revealed.

[Figure 1: (A) An example A matrix with two principal submatrices, showing the correspondence between A(ρ_l, ρ_l) and C(ρ_l, :). (B) Mapping of C_1 and C_2 to C, illustrating the role of ι_l, φ_l and η_l.]

Sampling principal submatrices also allows for an intuitive method of matrix reconstruction. As shown in Fig. 1, any n × n rank r SPSD matrix A can be decomposed as A = CC^T for some C ∈ R^{n×r}. Any principal submatrix of A can also be decomposed in the same way.
Further, if ρ_i is an ordered set indexing the ith principal submatrix of A, it must be that A(ρ_i, ρ_i) = C(ρ_i, :)C(ρ_i, :)^T.¹ This suggests we can decompose each A(ρ_i, ρ_i) to learn the rows of C and then reconstruct A from the learned C, but there is one complication. Any matrix C(ρ_i, :) such that A(ρ_i, ρ_i) = C(ρ_i, :)C(ρ_i, :)^T is only defined up to an orthonormal transformation. The naïve algorithm just suggested has no way of ensuring the rows of C learned from two different principal submatrices are consistent with respect to this degeneracy. Fortunately, the situation is easily remedied if the principal submatrices in question have some overlap, so that the C(ρ_i, :) matrices have some rows that map to each other. Under appropriate conditions explored below, we can learn unique orthonormal transformations rendering these rows equal, allowing us to align the C(ρ_i, :) matrices and learn a proper C.

Contributions In this paper, we make the following contributions. 1. We prove sufficient conditions, which are also necessary in certain situations, for the exact recovery of a SPSD matrix from a given set of its principal submatrices. 2. We present a novel algorithm which exactly recovers a SPSD matrix when the sufficient conditions are met. 3. We generalize the algorithm to the case where the observed principal submatrices are corrupted by noise, and present a theorem guaranteeing a bound on the reconstruction error.

1.1 Related Work

The low-rank matrix completion problem has received considerable attention since the work of Candès and Recht [17], who demonstrated that a simple convex problem could exactly recover many low-rank matrices with high probability. This work, as did much of what followed (e.g., [7–9]), made three key assumptions. First, entries of a matrix were assumed to be uncorrupted by noise and, second, revealed in a random, unstructured manner.
Finally, requirements such as incoherence were also imposed to rule out matrices with most of their mass concentrated in only a few entries. These assumptions have been reexamined and relaxed in additional work. The case of noisy observed entries has been considered in [10–14]. Others have reduced or removed the requirements for incoherence by using iterative, adaptive sampling schemes [15, 16]. Finally, recent work [18, 19] has considered the case of matrix recovery when entries are selected in a deterministic manner.

¹Throughout this work we will use MATLAB indexing notation, so C(ρ_i, :) is the submatrix of C made up of the rows indexed by the ordered set ρ_i.

Our work differs considerably from this earlier work. Our applications of interest allow us to assume much structure, i.e., that matrices are SPSD, which our algorithm exploits, and our sufficient conditions make no appeal to incoherence. Our work also differs from previous results for deterministic sampling schemes (e.g., [18, 19]), which neither consider noise nor provide sufficient conditions for exact recovery, instead approaching the problem as one of matrix approximation. Previous work has also considered the problem of completing SPSD matrices of any [20] or low rank [21, 22]. Our work to identify conditions for a unique completion of a given rank can be viewed as a continuation of this work, where our sufficient and necessary conditions can be understood in a particularly intuitive manner due to our sampling scheme. Finally, the Nyström method [23] is a well-known technique for approximating a SPSD matrix as low rank. It can also be applied to the matrix recovery problem, and in the noiseless case sufficient conditions for exact recovery are known [24]. However, the Nyström method requires sampling full columns and rows of the original matrix, a sampling scheme which may not be possible in many of our applications of interest.
2 Preliminaries

2.1 Deterministic Sampling for SPSD Matrices

We denote the set of index pairs for the revealed entries of a matrix by Ω. Formally, an index pair (i, j) is in Ω if and only if we observe the corresponding entry of an n × n matrix, so that Ω ⊂ [n] × [n].² In this work, we assume Ω indexes a set of principal submatrices of a matrix. Let Ω_l ⊆ Ω indicate a subset of Ω. If Ω_l indexes a principal submatrix of a matrix, it can be compactly described by the unique set of row (or equivalently column) indices it contains. Let ρ{Ω_l} = {i | (i, j) ∈ Ω_l} be the set of row indices contained in Ω_l. For compactness, let ρ_l = ρ{Ω_l}. Finally, let | · | indicate cardinality. Then, for an n × n matrix A of rank r, we make the following assumptions on Ω.

(A1) ρ{Ω} = [n].

(A2) There exists a collection Ω_1, . . . , Ω_k of subsets of Ω such that Ω = ∪_{l=1}^k Ω_l and, for each Ω_l, (i, i) ∈ Ω_l and (j, j) ∈ Ω_l if and only if (i, j) ∈ Ω_l and (j, i) ∈ Ω_l.

(A3) There exists a collection Ω_1, . . . , Ω_k of subsets of Ω such that A2 holds and, if k > 1, there exists an ordering τ_1, . . . , τ_k such that for all i ≥ 2, |ρ_{τ_i} ∩ (∪_{j=1}^{i−1} ρ_{τ_j})| ≥ r.

The first assumption ensures Ω indexes at least one entry for each row of A. Assumption A2 requires that Ω indexes a collection of principal submatrices of A, and A3 allows for the possible alignment of the rows of C (recall A = CC^T) estimated from each principal submatrix.

2.2 Additional Notation

Denote the set of real n × n SPSD matrices by S^n_+, and let A ∈ S^n_+ be the rank r matrix to be recovered. For the noisy case, ˜A will indicate a perturbed version of A. We will use the notation A_l to indicate the principal submatrix of a matrix A indexed by Ω_l. Denote the eigendecomposition of A as A = EΛE^T for the diagonal matrix Λ ∈ R^{r×r} containing the non-zero eigenvalues of A, λ_1 ≥ . . . ≥ λ_r, along its diagonal, and the matrix E ∈ R^{n×r} containing the corresponding eigenvectors of A in its columns. Let n_l denote the size of A_l and r_l its rank.
Because A_l is a principal submatrix of A, it follows that A_l ∈ S^{n_l}_+. Denote the eigendecomposition of each A_l as A_l = E_l Λ_l E_l^T for the matrices Λ_l ∈ R^{r_l×r_l} and E_l ∈ R^{n_l×r_l}. We add tildes to the appropriate symbols for the eigendecomposition of ˜A and its principal submatrices. Finally, let ι_l = ρ_{τ_l} ∩ (∪_{j=1,...,l−1} ρ_{τ_j}) be the intersection of the indices for the lth principal submatrix with the indices of all of the principal submatrices ordered before it. Let C_l be a matrix such that C_l C_l^T = A_l. If A_l is a principal submatrix of A, there will exist some C_l such that C(ρ_l, :) = C_l. For such a C_l, let φ_l be an index set that assigns the rows of the matrix C(ι_l, :) to their location in C_l, so that C(ι_l, :) = C_l(φ_l, :), and let η_l assign the rows of C(ρ_l \ ι_l, :) to their location in C_l, so that C(ρ_l \ ι_l, :) = C_l(η_l, :). The role of ρ_l, ι_l, η_l and φ_l is illustrated for the case of two principal submatrices with τ_1 = 1, τ_2 = 2 in Figure 1.

²We use the notation [n] to indicate the set {1, . . . , n}.

Algorithm 1 SPSD Matrix Recovery (r, {˜E_l, ˜Λ_l, τ_l, ρ_l, φ_l, ι_l, η_l}_{l=1}^k)
Initialize ˆC as an n × r matrix.
1. ˆC(ρ_{τ_1}, :) ← ˜E_{τ_1}(:, 1:r) ˜Λ_{τ_1}^{1/2}(1:r, 1:r)
2. For l ∈ {2, . . . , k}:
   (a) ˆC_l ← ˜E_{τ_l}(:, 1:r) ˜Λ_{τ_l}^{1/2}(1:r, 1:r)
   (b) ˆW_l ← argmin_{W : WW^T = I} || ˆC(ι_l, :) − ˆC_l(φ_l, :)W ||_F^2
   (c) ˆC(ρ_{τ_l} \ ι_l, :) ← ˆC_l(η_l, :) ˆW_l
3. Return ˆA = ˆC ˆC^T

3 The Algorithm

Before establishing a set of sufficient conditions for exact matrix completion, we present our algorithm. Except for minor notational differences, the algorithms for the noiseless and noisy matrix recovery scenarios are identical, and for brevity we present the algorithm for the noisy scenario. Let Ω sample the observed entries of ˜A so that A1 through A3 hold. Assume each perturbed principal submatrix ˜A_l indexed by Ω is SPSD and of rank r or greater. These assumptions on each ˜A_l will be further explored in Section 5.
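The index bookkeeping ρ_l, ι_l, φ_l and η_l can be illustrated with a small helper (our own code, with a toy pair of index sets): given the union of previously processed indices and the current ρ_l, it returns ι_l as global indices and φ_l, η_l as positions inside ρ_l.

```python
def overlap_bookkeeping(prev_union, rho):
    """iota: global indices of rho already covered by earlier submatrices;
    phi: positions of those indices inside rho (the rows of C_l matching
    C(iota, :)); eta: positions inside rho of the genuinely new rows."""
    iota = [g for g in rho if g in prev_union]
    phi = [rho.index(g) for g in iota]
    eta = [p for p, g in enumerate(rho) if g not in prev_union]
    return iota, phi, eta

# Two overlapping principal submatrices, as in Figure 1.
iota, phi, eta = overlap_bookkeeping({0, 1, 2, 3}, [2, 3, 4, 5])
```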
Decompose each ˜A_l as ˜A_l = ˜E_l ˜Λ_l ˜E_l^T, and form a rank r matrix ˆC_l as ˆC_l = ˜E_l(:, 1:r) ˜Λ_l^{1/2}(1:r, 1:r). The rows of the ˆC_l matrices contain estimates for the rows of C such that A = CC^T, though rows estimated from different principal submatrices may be expressed with respect to different orthonormal transformations. Without loss of generality, assume the principal submatrices are labeled so that τ_1 = 1, . . . , τ_k = k. Our algorithm begins to construct ˆC by setting ˆC(ρ_1, :) = ˆC_1. In this step, we also implicitly choose to express ˆC with respect to the basis for ˆC_1. We then iteratively add rows to ˆC, for each ˆC_l adding the rows ˆC_l(η_l, :) to ˆC. To estimate the orthonormal transformation that aligns the rows of ˆC_l with the rows of ˆC estimated in previous iterations, we solve the following optimization problem:

ˆW_l = argmin_{W : WW^T = I} || ˆC(ι_l, :) − ˆC_l(φ_l, :)W ||_F^2.   (1)

In words, equation (1) estimates ˆW_l so that the rows of ˆC_l which overlap with the previously estimated rows of ˆC match as closely as possible. In the noiseless case, (1) is equivalent to finding the W such that ˆC(ι_l, :) − ˆC_l(φ_l, :)W = 0. Equation (1) is known as the Procrustes problem; it is non-convex, but its solution can be found in closed form, and sufficient conditions for the uniqueness of its solution are known [25]. After learning ˆW_l for each ˆC_l, we build up the estimate for ˆC by setting ˆC(ρ_l \ ι_l, :) = ˆC_l(η_l, :) ˆW_l. This step adds the rows of ˆC_l that do not overlap with those already added to ˆC to the growing estimate of ˆC. If we process principal submatrices in the order specified by A3, this algorithm will generate a complete estimate for ˆC. The full matrix ˆA can then be estimated as ˆA = ˆC ˆC^T. The pseudocode for this algorithm is given in Algorithm 1.

4 The Noiseless Case

We begin this section by stating one additional assumption on A.

(A4) There exists a collection Ω_1, . . . , Ω_k of subsets of Ω such that A2 holds and, if k > 1, there exists an ordering τ_1, . . .
, τ_k such that the rank of A(ι_l, ι_l) is equal to r for each l ∈ {2, . . . , k}.

In Theorem 2 we show that A1–A4 are sufficient to guarantee the exact recovery of A. Conditions A1–A4 can also be necessary for the unique recovery of A by any method, as we show next in Theorem 1. Theorem 1 may at first glance appear quite simple, but it is a restatement of Lemma 6 in the appendix, from which more general necessity results can be derived. Specifically, Corollary 7 in the appendix can be used to establish that the above conditions are necessary to recover A from a set of its principal submatrices which can be aligned in an overlapping sequence (e.g., submatrices running down the diagonal of A), as might be encountered when constructing a covariance matrix from sequentially sampled subgroups of variables. Corollary 8 establishes a similar result when there exists a set of principal submatrices which have no overlap among themselves but all overlap with one other submatrix not in the set, and Corollary 9 establishes that it is sufficient to find just one principal submatrix that obeys certain conditions with respect to the rest of the sampled entries of the matrix to certify the impossibility of matrix completion. This last corollary in fact applies even when the rest of the sampled entries do not fall into a union of principal submatrices of the matrix.

Theorem 1. Let Ω ≠ [n] × [n] index A so that A2 holds for some Ω_1 ⊆ Ω and Ω_2 ⊆ Ω. Then A1, A3 and A4 must hold with respect to Ω_1 and Ω_2 for A to be recoverable by any method.

The proof can be found in the appendix. Here we briefly provide the intuition. Key to understanding the proof is recognizing that recovering A from the set of entries indexed by Ω is equivalent to learning a matrix C from the same set of entries such that A = CC^T. If A1 is not met, a complete row and the corresponding column of A are not sampled, and there is nothing to constrain the estimate for the corresponding row of C.
If A3 and A4 are not met, we can construct a C such that all of the entries of the matrices A and CC^T indexed by Ω are identical and yet A ≠ CC^T. We now show that our algorithm can recover A as soon as the above conditions are met, establishing their sufficiency.

Theorem 2. Algorithm 1 will exactly recover A from a set of its principal submatrices indexed by Ω_1, . . . , Ω_k which meets conditions A1 through A4.

The proof, which is provided in the appendix, shows that in the noiseless case, for each principal submatrix A_l of A, step 2a of Algorithm 1 will learn an exact ˆC_l such that A_l = ˆC_l ˆC_l^T. Further, when assumptions A3 and A4 are met, step 2b will correctly learn the orthonormal transformation to align each ˆC_l to the previously added rows of ˆC. Therefore, progressive iterations of step 2 correctly learn more and more rows of a unified ˆC. As the algorithm progresses, all of the rows of ˆC are learned, and the entirety of A can be recovered in step 3 of the algorithm. It is instructive to ask what we have gained or lost by constraining ourselves to sampling principal submatrices. In particular, we can ask how many individual entries must be observed before we can recover a matrix. A SPSD matrix has at least nr degrees of freedom, and we would not expect any matrix recovery method to succeed before at least this many entries of the original matrix are revealed. The next theorem establishes that our sampling scheme is not necessarily wasteful with respect to this bound.

Theorem 3. For any rank r ≥ 1 matrix A ∈ S^n_+ there exists an Ω such that A1–A3 hold and |Ω| ≤ n(2r + 1).

Of course, this work is motivated by real-world scenarios where we are not at liberty to finely select the principal submatrices we sample, and in practice we may often have to settle for a set of principal submatrices which sample more of the matrix. However, it is reassuring to know that our sampling scheme does not necessarily require a wasteful number of samples.
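The noiseless recovery guaranteed by Theorem 2 can be demonstrated end to end. The sketch below is our own illustrative reimplementation of the procedure (not the authors' code): it takes the top-r eigenpairs of each observed block, aligns overlapping rows with the standard SVD closed form of the Procrustes problem, and stitches the rows of ˆC together; the 6 × 6 rank-2 matrix and the two overlapping blocks are invented for the demonstration.

```python
import numpy as np

def procrustes(A, B):
    """Orthonormal W minimizing ||A - B W||_F, via the SVD of B^T A."""
    U, _, Vt = np.linalg.svd(B.T @ A)
    return U @ Vt

def recover_spsd(blocks, rhos, n, r):
    """Noiseless sketch of the recovery procedure. blocks[l] is the observed
    principal submatrix A(rho_l, rho_l); rhos[l] lists its global indices,
    already ordered so that each block overlaps earlier ones in >= r rows."""
    C = np.zeros((n, r))
    seen = set()
    for l, (Al, rho) in enumerate(zip(blocks, rhos)):
        lam, E = np.linalg.eigh(Al)
        top = np.argsort(lam)[::-1][:r]                     # top-r eigenpairs
        Cl = E[:, top] * np.sqrt(np.maximum(lam[top], 0.0))
        if l == 0:
            C[rho] = Cl                                     # fixes the basis
        else:
            phi = [p for p, g in enumerate(rho) if g in seen]
            W = procrustes(C[[rho[p] for p in phi]], Cl[phi])
            eta = [p for p in range(len(rho)) if p not in phi]
            C[[rho[p] for p in eta]] = Cl[eta] @ W          # aligned new rows
        seen.update(rho)
    return C @ C.T

rng = np.random.default_rng(0)
C0 = rng.standard_normal((6, 2))
A = C0 @ C0.T                                # rank-2 SPSD ground truth
rhos = [[0, 1, 2, 3], [2, 3, 4, 5]]          # overlap {2, 3} of size r = 2
blocks = [A[np.ix_(p, p)] for p in rhos]
A_hat = recover_spsd(blocks, rhos, n=6, r=2)
```

Since the overlap has size r and A(ι, ι) is (generically) full rank, the reconstruction is exact up to floating point.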
We note that assumptions A1 through A4 have an important benefit with respect to a requirement of incoherence. Incoherence is an assumption about the entire row and column space of a matrix and cannot be verified to hold with only the observed entries of a matrix. However, assumptions A1 through A4 can be verified to hold for a matrix of known rank using its observed entries. Thus, it is possible to verify that these assumptions hold for a given Ω and A and provide a certificate guaranteeing exact recovery before matrix completion is attempted.

5 The Noisy Case

We analyze the behavior of Algorithm 1 in the presence of noise. For simplicity, we assume each observed, noise-corrupted principal submatrix is SPSD so that the eigendecompositions in steps 1 and 2a of the algorithm are well defined. In the noiseless case, to guarantee the uniqueness of Â, A4 required each A(ι_l, ι_l) to be of rank r. In the noisy case, we place a similar requirement on Ã(ι_l, ι_l), where we recognize that the rank of each Ã(ι_l, ι_l) may be larger than r due to noise.

(A5) There exists a collection Ω1, . . . , Ωk of subsets of Ω such that A2 holds and, if k > 1, there exists an ordering τ1, . . . , τk such that the rank of Ã(ι_l, ι_l) is greater than or equal to r for each l ∈ {2, . . . , k}.

(A6) There exists a collection Ω1, . . . , Ωk of subsets of Ω such that A2 holds and Ã_l ∈ S^{n_l}_+ for each l ∈ {1, . . . , k}.

In practice, any Ã_l which is not SPSD can be decomposed into the sum of a symmetric and an antisymmetric matrix. The negative eigenvalues of the symmetric matrix can then be set to zero, rendering a SPSD matrix. As long as this resulting matrix meets the rank requirement in A5, it can be used in place of Ã_l. Our algorithm can then be used without modification to estimate Â.

Theorem 4. Let Ω index an n × n matrix Ã which is a perturbed version of the rank r matrix A such that A1–A6 simultaneously hold for a collection of principal submatrices indexed by Ω1, . . . , Ωk.
Let $b \ge \max_{l \in [k]} \|C_l\|_F$ for some $C_l \in \mathbb{R}^{n_l \times r}$ such that $A_l = C_l C_l^T$, let $\zeta \ge \lambda_{l,1}$, and let $\delta \le \min\{\min_{i \in [r-1]} |\lambda_{l,i} - \lambda_{l,i+1}|, \lambda_{l,r}\}$. Assume $\|A_l - \tilde{A}_l\|_F \le \epsilon$ for all $l$ for some $\epsilon < \min\{b^2/r, \delta/2, 1\}$. Then if in step 2 of Algorithm 1, $\operatorname{rank}\{\hat{C}_l(\phi_l, :)^T \hat{C}(\iota_l, :)\} = r$ for all $l \ge 2$, Algorithm 1 will estimate an $\hat{A}$ from the set of principal submatrices of $\tilde{A}$ indexed by $\Omega$ such that

$$\|A - \hat{A}\|_F \le 2G^{k-1}L\|C\|_F\sqrt{r\epsilon} + G^{2k-2}L^2 r\epsilon,$$

where $C \in \mathbb{R}^{n \times r}$ is some matrix such that $A = CC^T$, $G = 4 + 12/v$ with $v \le \lambda_r(A(\iota_l, \iota_l))/b^2$ for all $l$, and $L = \sqrt{1 + 16\zeta/\delta^2 + 8\sqrt{2}\,\zeta^{1/2}/\delta^{3/2}}$.

The proof is left to the appendix and is accomplished in two parts. In the first part, we guarantee that the ordered eigenvalues and eigenvectors of each $\tilde{A}_l$, which are the basis for estimating each $\hat{C}_l$, will not be too far from those of the corresponding $A_l$. In the second part, we bound the amount of additional error that can be introduced by learning imperfect $\hat{W}$ matrices which result in slight misalignments as each $\hat{C}_l$ matrix is incorporated into the final estimate for the complete $\hat{C}$. This second part relies on a general perturbation bound for the Procrustes problem, derived as Lemma 16 in the appendix.

Our error bound is non-probabilistic and applies in the presence of adversarial noise. While we know of no existing results for the recovery of matrices from deterministic samplings of noise-corrupted entries, we can compare our work to bounds obtained for various results applicable to random sampling schemes (e.g., [10–13]). These results require either incoherence [10, 11] or boundedness [13] of the entries of the matrix to be recovered, or assume the sampling scheme obeys the restricted isometry property [12]. Error is measured with various norms, but in all cases shows a linear dependence on the size of the original perturbation.
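The SPSD repair described under A6 above (split off the antisymmetric part, then zero out negative eigenvalues) is a one-liner in practice. A hedged sketch, with the function name our own:

```python
import numpy as np

def project_spsd(M):
    """Make a noisy block usable under A6: symmetrise, then clip negative
    eigenvalues to zero, as suggested in the text. Names are our own."""
    S = (M + M.T) / 2                      # discard the antisymmetric part
    w, V = np.linalg.eigh(S)
    return (V * np.clip(w, 0.0, None)) @ V.T
```

The result is the nearest SPSD matrix to the symmetrised input in Frobenius norm; it can stand in for an observed block as long as its rank still satisfies A5.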
For this initial analysis, our bound establishes that reconstruction error consistently goes to 0 with perturbation size, and we conjecture that with a refinement of our proof technique we can prove a linear dependence on ϵ. We provide initial evidence for this conjecture in the results below.

6 Simulations

We demonstrate our algorithm's performance on simulated data, starting with the noiseless setting in Fig. 2.

Figure 2: Noiseless simulation results. (A) Example masks for successful completion of a rank 4 matrix. (B) Completion success as rank is varied for masks with minimal overlap (min_l |ι_l|) of 10. (C) Completion success for rank 1–55 matrices with block diagonal masks with minimal overlap ranging between 0–54.

Fig. 2A shows three sampling schemes, referred to as masks, that meet assumptions A1 through A3 for a randomly generated 40 × 40 rank 4 matrix. In all of the noiseless simulations, we simulate a rank r matrix A ∈ S^n_+ by first randomly generating a C ∈ R^{n×r} with entries individually drawn from a N(0, 1) distribution and forming A as A = CC^T. The block diagonal mask is formed from 5 × 5 principal submatrices running down the diagonal, each principal submatrix overlapping the one to its upper left. Such a mask might be encountered in practice if we obtain pairwise measurements from small sets of variables sequentially.
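Block diagonal masks of this kind are easy to generate programmatically. The helper below is our own construction (it assumes overlap < block size), producing the index sets of the overlapping principal submatrices that make up the mask:

```python
import numpy as np

def block_diagonal_indices(n, block, overlap):
    """Index sets for a block diagonal mask: `block`-sized principal
    submatrices stepping down the diagonal, each sharing `overlap` indices
    with its predecessor. An illustrative helper, not the paper's code;
    Fig. 2A uses 5x5 blocks on a 40x40 matrix."""
    step = block - overlap
    starts = list(range(0, n - block + 1, step))
    if starts[-1] != n - block:        # make the final block reach index n-1
        starts.append(n - block)
    return [np.arange(s, s + block) for s in starts]
```

Each returned index set defines one observed principal submatrix; together they cover the diagonal band of the matrix.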
The l-th principal submatrix of the full columns mask is formed by sampling all pairs of entries (i, j) indexed by i, j ∈ {1, 2, 3, 4, l+4} and might be encountered when obtaining pairwise measurements between sets of variables, where some small number of variables is present in all sets. The random mask is formed from principal submatrices randomly generated to conform to assumptions A1 through A3 and demonstrates that masks with non-obvious structure in the underlying principal submatrices can conform to assumptions A1 through A3. Algorithm 1 correctly recovers the true matrix from all three masks.

In Fig. 2B, we modify these three types of masks so that min_l |ι_l|, the minimal overlap of a principal submatrix with those ordered before it, is 10 for each, and attempt to reconstruct random matrices of size 55 × 55 and increasing rank. Corollaries 7–9 in the appendix, which can be derived from Theorem 1 above, can be applied to these scenarios to establish the necessity that min_l |ι_l| be at least r for a rank r matrix. As predicted, for all masks recovery is successful for all matrices of rank 10 or less and unsuccessful for matrices of greater rank.

In Fig. 2C, we show this is not unique to masks with minimal overlap of 10. Here we generate block diagonal masks with minimal overlap between the principal submatrices varying between 0 and 54. For each overlap value, we then attempt to recover matrices of rank 1 through o + 1, where o is the minimal overlap value. To guard against false positives, we randomly generated 10 matrices of a specified rank for each mask and only indicated success in black if matrix completion was successful in all cases. As predicted by theory, matrix completion failed exactly when the rank of the underlying matrix exceeded the minimal overlap value of the mask. Identical results were obtained for the full column and random masks.

We provide evidence that the dependence on ϵ in Theorem 4 should be linear in Fig. 3.
We generate random 55 × 55 matrices of rank 1 through 10. Matrices were generated as in the noiseless scenario and normalized to have a Frobenius norm of 1. We use a block diagonal mask with 25 × 25 blocks and an overlap of 15, and randomly generate SPSD noise, scaled so that ||A_l − Ã_l|| = ϵ for each principal submatrix. We sweep through a range of ϵ ∈ [ϵ_min, ϵ_max] for an ϵ_min > 0 and an ϵ_max determined by the matrix with the tightest constraint on ϵ in Theorem 4.

Figure 3: Noisy simulation results. (A) Reconstruction error with increasing amounts of noise applied to the original matrix. (B) Traces in panel (A), each divided by its value at ϵ = ϵ_min.

Fig. 3A shows that reconstruction error generally increases with ϵ and the rank of the matrix to be recovered. To better visualize the dependence on ϵ, in Fig. 3B we plot ||A − Â||_F / ||A − Â||_{F,ϵ_min}, where ||A − Â||_{F,ϵ_min} indicates the reconstruction error obtained with ϵ = ϵ_min. All of the lines coincide, suggesting a linear dependence on ϵ.

7 Discussion

In this work we present an algorithm for the recovery of a SPSD matrix from a deterministic sampling of its principal submatrices. We establish sufficient conditions for our algorithm to exactly recover a SPSD matrix and present a set of necessity results demonstrating that our stated conditions can be quite useful for determining when matrix recovery is possible by any method. We also show that our algorithm recovers matrices obscured by noise with increasing fidelity as the magnitude of noise goes to zero. Our algorithm incorporates no tuning parameters and can be computationally light, as the majority of computations concern potentially small principal submatrices of the original matrix.
Implementations of the algorithm, which estimate each Ĉ_l in parallel, are also easy to construct. Additionally, our results can be generalized when the principal submatrices our method uses for reconstruction are themselves not fully observed. In this case, existing matrix recovery techniques can be used to estimate each complete underlying principal submatrix with some bounded error. Our algorithm can then reconstruct the full matrix from these estimated principal submatrices.

An open question is the computational complexity of finding a set of principal submatrices which satisfy conditions A1 through A4. However, in many practical situations there is an obvious set of principal submatrices and ordering which satisfy these conditions. For example, in the neuroscience application described in the introduction, a set of recording probes are independently movable and each probe records from a given number of neurons in the brain. Each configuration of the probes corresponds to a block of simultaneously recorded neurons, and by moving the probes one at a time, blocks with overlapping variables can be constructed. When learning a low rank covariance structure for this data, the overlapping blocks of variables naturally define observed blocks of a low rank covariance matrix to use in Algorithm 1.

Acknowledgements

This work was supported by an NDSEG fellowship, NIH grant T90 DA022762, NIH grant R90 DA023426-06 and by the Craig H. Nielsen Foundation. We thank Martin Azizyan, Geoff Gordon, Akshay Krishnamurthy and Aarti Singh for their helpful discussions and Rob Kass for his guidance.

References

[1] John P. Cunningham and Byron M. Yu. Dimensionality reduction for large-scale neural recordings. Nature Neuroscience, 17(11):1500–1509, 2014.

[2] Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, and Jakob Macke. Inferring neural population dynamics from multiple partial recordings of the same neural circuit.
In Advances in Neural Information Processing Systems, pages 539–547, 2013.

[3] Suraj Keshri, Eftychios Pnevmatikakis, Ari Pakman, Ben Shababo, and Liam Paninski. A shotgun sampling solution for the common input problem in neural connectivity inference. arXiv preprint arXiv:1309.3724, 2013.

[4] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.

[5] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, Cambridge, MA, 2006.

[6] John A. Lee and Michel Verleysen. Nonlinear Dimensionality Reduction. Springer, 2007.

[7] Emmanuel J. Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. Information Theory, IEEE Transactions on, 56(5):2053–2080, May 2010.

[8] Raghunandan H. Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from a few entries. Information Theory, IEEE Transactions on, 56(6):2980–2998, 2010.

[9] Benjamin Recht. A simpler approach to matrix completion. The Journal of Machine Learning Research, 12:3413–3430, 2011.

[10] Raghunandan H. Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from noisy entries. Journal of Machine Learning Research, 11:2057–2078, 2010.

[11] Emmanuel J. Candès and Yaniv Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.

[12] Emmanuel J. Candès and Yaniv Plan. Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. Information Theory, IEEE Transactions on, 57(4):2342–2359, 2011.

[13] Vladimir Koltchinskii, Karim Lounici, and Alexandre B. Tsybakov. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. The Annals of Statistics, 39(5):2302–2329, 2011.

[14] Sahand Negahban and Martin J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise.
The Journal of Machine Learning Research, 13:1665–1697, 2012.

[15] Akshay Krishnamurthy and Aarti Singh. Low-rank matrix and tensor completion via adaptive sampling. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 836–844, 2013.

[16] Jie Chen, Nannan Cao, Kian Hsiang Low, Ruofei Ouyang, Colin Keng-Yan Tan, and Patrick Jaillet. Parallel Gaussian process regression with low-rank covariance matrix approximations. arXiv preprint arXiv:1305.5826, 2013.

[17] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.

[18] Eyal Heiman, Gideon Schechtman, and Adi Shraibman. Deterministic algorithms for matrix completion. Random Structures & Algorithms, 2013.

[19] Troy Lee and Adi Shraibman. Matrix completion from any given set of observations. In Advances in Neural Information Processing Systems, pages 1781–1787, 2013.

[20] Monique Laurent. Matrix completion problems. Encyclopedia of Optimization, pages 1967–1975, 2009.

[21] Monique Laurent and Antonios Varvitsiotis. A new graph parameter related to bounded rank positive semidefinite matrix completions. Mathematical Programming, 145(1-2):291–325, 2014.

[22] Monique Laurent and Antonios Varvitsiotis. Positive semidefinite matrix completion, universal rigidity and the strong Arnold property. Linear Algebra and its Applications, 452:292–317, 2014.

[23] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems 13. Citeseer, 2001.

[24] Sanjiv Kumar, Mehryar Mohri, and Ameet Talwalkar. Sampling techniques for the Nyström method. In International Conference on Artificial Intelligence and Statistics, pages 304–311, 2009.

[25] Peter H. Schönemann. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1–10, 1966.
Distributed Parameter Estimation in Probabilistic Graphical Models

Yariv D. Mizrahi¹, Misha Denil², Nando de Freitas²,³,⁴
¹University of British Columbia, Canada; ²University of Oxford, United Kingdom; ³Canadian Institute for Advanced Research; ⁴Google DeepMind
yariv@math.ubc.ca, {misha.denil,nando}@cs.ox.ac.uk

Abstract

This paper presents foundational theoretical results on distributed parameter estimation for undirected probabilistic graphical models. It introduces a general condition on composite likelihood decompositions of these models which guarantees the global consistency of distributed estimators, provided the local estimators are consistent.

1 Introduction

Undirected probabilistic graphical models, also known as Markov Random Fields (MRFs), are a natural framework for modelling in networks, such as sensor networks and social networks [24, 11, 20]. In large-scale domains there is great interest in designing distributed learning algorithms to estimate parameters of these models from data [27, 13, 19]. Designing distributed algorithms in this setting is challenging because the distribution over variables in an MRF depends on the global structure of the model.

In this paper we make several theoretical contributions to the design of algorithms for distributed parameter estimation in MRFs by showing how the recent works of Liu and Ihler [13] and of Mizrahi et al. [19] can both be seen as special cases of distributed composite likelihood. Casting these two works in a common framework allows us to transfer results between them, strengthening the results of both works.

Mizrahi et al. introduced a theoretical result, known as the LAP condition, to show that it is possible to learn MRFs with untied parameters in a fully-parallel but globally consistent manner. Their result led to the construction of a globally consistent estimator whose cost is linear in the number of cliques, as opposed to exponential as in centralised maximum likelihood estimators.
While remarkable, their results apply only to a specific factorisation, with the cost of learning being exponential in the size of the factors. While their factors are small for lattice-MRFs and other models of low degree, they can be as large as the original graph for other models, such as fully-observed Boltzmann machines [1].

In this paper, we introduce the Strong LAP Condition, which characterises a large class of composite likelihood factorisations for which it is possible to obtain global consistency, provided the local estimators are consistent. This much stronger condition enables us to construct linear and globally consistent distributed estimators for a much wider class of models than Mizrahi et al., including fully-connected Boltzmann machines. Using our framework we also show how the asymptotic theory of Liu and Ihler applies more generally to distributed composite likelihood estimators. In particular, the Strong LAP Condition provides a sufficient condition to guarantee the validity of a core assumption made in the theory of Liu and Ihler, namely that each local estimate for the parameter of a clique is a consistent estimator of the corresponding clique parameter in the joint distribution.

Figure 1: Left: A simple 2d-lattice MRF to illustrate our notation. For node j = 7 we have N(x_j) = {x_4, x_8}. Centre left: The 1-neighbourhood of the clique q = {x_7, x_8} including additional edges (dashed lines) present in the marginal over the 1-neighbourhood. Factors of this form are used by the LAP algorithm of Mizrahi et al. Centre right: The MRF used by our conditional estimator of Section 5 when using the same domain as Mizrahi et al. Right: A smaller neighbourhood which we show is also sufficient to estimate the clique parameter of q.
By applying the Strong LAP Condition to verify the assumption of Liu and Ihler, we are able to import their M-estimation results into the LAP framework directly, bridging the gap between LAP and consensus estimators.

2 Background

Our goal is to estimate the D-dimensional parameter vector θ of an MRF with the following Gibbs density or mass function:

$$p(x \mid \theta) = \frac{1}{Z(\theta)} \exp\Big(-\sum_{c} E(x_c \mid \theta_c)\Big) \qquad (1)$$

Here c ∈ C is an index over the cliques of an undirected graph G = (V, E), E(x_c | θ_c) is known as the energy or Gibbs potential, and Z(θ) is a normalizing term known as the partition function. When E(x_c | θ_c) = −θ_c^T φ_c(x_c), where φ_c(x_c) is a local sufficient statistic derived from the values of the local data vector x_c, this model is known as a maximum entropy or log-linear model. In this paper we do not restrict ourselves to a specific form for the potentials, leaving them as general functions; we require only that their parameters are identifiable. Throughout this paper we focus on the case where the x_j's are discrete random variables; however, generalising our results to the continuous case is straightforward.

The j-th node of G is associated with the random variable x_j for j = 1, . . . , M, and the edge connecting nodes j and k represents the statistical interaction between x_j and x_k. By the Hammersley-Clifford Theorem [10], the random vector x satisfies the Markov property with respect to the graph G, i.e., p(x_j | x_{−j}) = p(x_j | x_{N(x_j)}) for all j, where x_{−j} denotes all variables in x excluding x_j, and x_{N(x_j)} are the variables in the neighbourhood of node j (variables associated with nodes in G directly connected to node j).

2.1 Centralised estimation

The standard approach to parameter estimation in statistics is through maximum likelihood, which chooses parameters θ by maximising

$$L_{ML}(\theta) = \prod_{n=1}^{N} p(x_n \mid \theta) \qquad (2)$$

(To keep the notation light, we reserve n to index the data samples.
In particular, x_n denotes the n-th |V|-dimensional data vector and x_{mn} refers to the n-th observation of node m.) This estimator has played a central role in statistics as it has many desirable properties including consistency, efficiency and asymptotic normality. However, applying maximum likelihood estimation to an MRF is generally intractable since computing the value of log L_{ML} and its derivative require evaluating the partition function and an expectation over the model, respectively. Both of these values involve a sum over exponentially many terms.

To surmount this difficulty it is common to approximate p(x | θ) as a product over more tractable terms. This approach is known as composite likelihood and leads to an objective of the form

$$L_{CL}(\theta) = \prod_{n=1}^{N} \prod_{i=1}^{I} f^i(x_n, \theta^i) \qquad (3)$$

where θ^i denote the (possibly shared) parameters of each composite likelihood factor f^i. Composite likelihood estimators are both well studied and widely applied [6, 14, 12, 7, 16, 2, 22, 4, 21]. In practice the f^i terms are chosen to be easy to compute, and are typically local functions, depending only on some local region of the underlying graph G.

An early and influential variant of composite likelihood is pseudo-likelihood (PL) [3], where f^i(x, θ^i) is chosen to be the conditional distribution of x_i given its neighbours,

$$L_{PL}(\theta) = \prod_{n=1}^{N} \prod_{m=1}^{M} p(x_{mn} \mid x_{N(x_m)n}, \theta_m) \qquad (4)$$

Since the joint distribution has a Markov structure with respect to the graph G, the conditional distribution for x_m depends only on its neighbours, namely x_{N(x_m)}. In general more statistically efficient composite likelihood estimators can be obtained by blocking, i.e. choosing the f^i(x, θ^i) to be conditional or marginal likelihoods over blocks of variables, which may be allowed to overlap. Composite likelihood estimators are often divided into conditional and marginal variants, depending on whether the f^i(x, θ^i) are formed from conditional or marginal likelihoods.
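For a binary pairwise MRF, each factor in Eq. (4) reduces to a logistic conditional, which is why pseudo-likelihood sidesteps the partition function entirely. A minimal sketch under our own conventions (0/1 variables, pairwise potentials only, no unary terms), not the paper's code:

```python
import numpy as np

def log_pseudo_likelihood(theta, X, edges):
    """Log of Eq. (4) for a binary (0/1) pairwise MRF with no unary terms.

    With energies E = -theta_ij * x_i * x_j, each node's conditional given
    its neighbours is a logistic function of the incoming edge weights.
    theta: dict mapping a sorted edge tuple to its weight (our layout).
    X: (N, M) array of samples; edges: list of sorted node pairs.
    """
    nbrs = {}
    for i, j in edges:
        nbrs.setdefault(i, []).append(j)
        nbrs.setdefault(j, []).append(i)
    ll = 0.0
    for x in X:
        for m, nb in nbrs.items():
            # Field on node m induced by its neighbours' current values.
            h = sum(theta[tuple(sorted((m, j)))] * x[j] for j in nb)
            p1 = 1.0 / (1.0 + np.exp(-h))   # P(x_m = 1 | x_N(m))
            ll += np.log(p1 if x[m] == 1 else 1.0 - p1)
    return ll
```

Each term touches only a node and its neighbours, so for a fixed parameter vector the objective decomposes locally, exactly as described in the text.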
In machine learning the conditional variant is quite popular [12, 7, 16, 15, 4] while the marginal variant has received less attention. In statistics, both the marginal and conditional variants of composite likelihood are well studied (see the comprehensive review of Varin et al. [26]).

An unfortunate difficulty with composite likelihood is that the estimators cannot be computed in parallel, since elements of θ are often shared between the different factors. For a fixed value of θ the terms of log L_{CL} decouple over data and over blocks of the decomposition; however, if θ is not fixed then the terms remain coupled.

2.2 Consensus estimation

Seeking greater parallelism, researchers have investigated methods for decoupling the sub-problems in composite likelihood. This leads to the class of consensus estimators, which perform parameter estimation independently in each composite likelihood factor. This approach results in parameters that are shared between factors being estimated multiple times, and a final consensus step is required to force agreement between the solutions from separate sub-problems [27, 13]. Centralised estimators enforce sub-problem agreement throughout the estimation process, requiring many rounds of communication in a distributed setting. Consensus estimators allow sub-problems to disagree during optimisation, enforcing agreement as a post-processing step which requires only a single round of communication.

Liu and Ihler [13] approach distributed composite likelihood by optimising each term separately

$$\hat{\theta}^i_{\beta_i} = \arg\max_{\theta_{\beta_i}} \left( \prod_{n=1}^{N} f^i(x_{A_i,n}, \theta_{\beta_i}) \right) \qquad (5)$$

where A_i denotes the group of variables associated with block i, and θ_{β_i} is the corresponding set of parameters. In this setting the sets β_i ⊆ V are allowed to overlap, but the optimisations are carried out independently, so multiple estimates for overlapping parameters are obtained. Following Liu and Ihler we have used the notation θ^i = θ_{β_i} to make this interdependence between factors explicit.
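The final consensus step described above can be as simple as averaging, per coordinate, whichever local estimates cover that coordinate. A sketch with our own data layout and uniform weights (Liu and Ihler also consider weighted variants):

```python
import numpy as np

def consensus_average(local_estimates, D):
    """Uniform-weight consensus operator (an illustrative sketch).

    local_estimates: list of (beta_i, theta_hat_i) pairs, where beta_i is
    the set of coordinate indices block i estimates and theta_hat_i maps
    each such index to its local estimate. D: global parameter dimension.
    Coordinates estimated by several blocks are averaged; coordinates
    estimated by none stay zero (the degenerate-embedding convention).
    """
    sums = np.zeros(D)
    counts = np.zeros(D)
    for beta, th in local_estimates:
        for c in beta:
            sums[c] += th[c]
            counts[c] += 1
    return sums / np.maximum(counts, 1)
```

Only this single combination step requires communication; every local optimisation in Eq. (5) runs independently.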
The analysis of this setting proceeds by embedding each local estimator θ̂^i_{β_i} into a degenerate estimator θ̂^i for the global parameter vector θ by setting θ̂^i_c = 0 for c ∉ β_i. The degenerate estimators are combined into a single non-degenerate global estimate using different consensus operators, e.g. weighted averages of the θ̂^i.

The analysis of Liu and Ihler assumes that for each sub-problem i and for each c ∈ β_i

$$(\hat{\theta}^i_{\beta_i})_c \xrightarrow{p} \theta_c \qquad (6)$$

i.e., each local estimate for the parameter of clique c is a consistent estimator of the corresponding clique parameter in the joint distribution. This assumption does not hold in general, and one of the contributions of this work is to give a general condition under which this assumption holds. The analysis of Liu and Ihler [13] considers the case where the local estimators in Equation 5 are arbitrary M-estimators [25]; however, their experiments address only the case of pseudo-likelihood. In Section 5 we prove that the factorisation used by pseudo-likelihood satisfies Equation 6, explaining the good results in their experiments.

2.3 Distributed estimation

Consensus estimation dramatically increases the parallelism of composite likelihood estimates by relaxing the requirements on enforcing agreement between coupled sub-problems. Recently Mizrahi et al. [19] have shown that if the composite likelihood factorisation is constructed correctly then consistent parameter estimates can be obtained without requiring a consensus step.

In the LAP algorithm of Mizrahi et al. [19] the domain of each composite likelihood factor (which they call the auxiliary MRF) is constructed by surrounding each maximal clique q with the variables in its 1-neighbourhood

$$A_q = \bigcup_{c \,\cap\, q \neq \emptyset} c$$

which contains all of the variables of q itself as well as the variables with at least one neighbour in q; see Figure 1 for an example.
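The 1-neighbourhood construction defining each LAP domain A_q is a one-line set union in code. A sketch with our own naming, checked against the 3×3 lattice of Figure 1 (nodes 1..9, row-major, pairwise cliques on grid edges):

```python
def one_neighbourhood(cliques, q):
    """LAP auxiliary-MRF domain: the union of every clique intersecting q.
    An illustrative helper, not the authors' code."""
    return set().union(*(c for c in cliques if c & q))

# The 3x3 lattice of Figure 1: horizontal and vertical edge cliques.
grid_edges = [(1, 2), (2, 3), (4, 5), (5, 6), (7, 8), (8, 9),
              (1, 4), (2, 5), (3, 6), (4, 7), (5, 8), (6, 9)]
cliques = [frozenset(e) for e in grid_edges]
A_q = one_neighbourhood(cliques, {7, 8})   # domain for q = {x7, x8}
```

For q = {x7, x8} this yields {4, 5, 7, 8, 9}, matching the centre-left panel of Figure 1.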
For MRFs of low degree the sets A_q are small, and consequently maximum likelihood estimates for parameters of MRFs over these sets can be obtained efficiently. The parametric form of each factor in LAP is chosen to coincide with the marginal distribution over A_q. The factorisation of Mizrahi et al. is essentially the same as in Equation 5, but the domain of each term is carefully selected, and the LAP theorems are proved only for the case where f^i(x_{A_q}, θ_{β_q}) = p(x_{A_q}, θ_{β_q}).

As in consensus estimation, parameter estimation in LAP is performed separately and in parallel for each term; however, agreement between sub-problems is handled differently. Instead of combining parameter estimates from different sub-problems, LAP designates a specific sub-problem as authoritative for each parameter (in particular the sub-problem with domain A_q is authoritative for the parameter θ_q). The global solution is constructed by collecting parameters from each sub-problem for which it is authoritative and discarding the rest.

In order to obtain consistency for LAP, Mizrahi et al. [19] assume that both the joint distribution and each composite likelihood factor are parametrised using normalized potentials.

Definition 1. A Gibbs potential E(x_c | θ_c) is said to be normalised with respect to zero if E(x_c | θ_c) = 0 whenever there exists t ∈ c such that x_t = 0.

A perhaps under-appreciated existence and uniqueness theorem [9, 5] for MRFs states that there exists one and only one potential normalized with respect to zero corresponding to a Gibbs distribution. This result ensures a one-to-one correspondence between Gibbs distributions and normalised potential representations of an MRF.

The consistency of LAP relies on the following observation. Suppose we have a Gibbs distribution p(x_V | θ) that factors according to the clique system C, and suppose that the parametrisation is chosen so that the potentials are normalised with respect to zero.
For a particular clique of interest q, the marginal over x_{A_q} can be written as follows (see Appendix A for a detailed derivation)

$$p(x_{A_q} \mid \theta) = \frac{1}{Z(\theta)} \exp\Big(-E(x_q \mid \theta_q) - \sum_{c \in C_q \setminus \{q\}} E(x_c \mid \theta_{V \setminus q})\Big) \qquad (7)$$

where C_q denotes the clique system of the marginal, which in general includes cliques not present in the joint. The same distribution can also be written in terms of different parameters α

$$p(x_{A_q} \mid \alpha) = \frac{1}{Z(\alpha)} \exp\Big(-E(x_q \mid \alpha_q) - \sum_{c \in C_q \setminus \{q\}} E(x_c \mid \alpha_c)\Big) \qquad (8)$$

which are also assumed to be normalised with respect to zero. As shown in Mizrahi et al. [19], the uniqueness of normalised potentials can be used to obtain the following result.

Proposition 2 (LAP argument [19]). If the parametrisations of p(x_V | θ) and p(x_{A_q} | α) are chosen to be normalized with respect to zero, and if the parameters are identifiable with respect to the potentials, then θ_q = α_q.

This proposition enables Mizrahi et al. [19] to obtain consistency for LAP under the standard smoothness and identifiability assumptions for MRFs [8].

3 Contributions of this paper

The strength of the results of Mizrahi et al. [19] is to show that it is possible to perform parameter estimation in a completely distributed way without sacrificing global consistency. They prove that through careful design of a composite likelihood factorisation it is possible to obtain estimates for each parameter of the joint distribution in isolation, without requiring even a final consensus step to enforce sub-problem agreement. Their weakness is that the LAP algorithm is very restrictive, requiring a specific composite likelihood factorisation.

The strength of the results of Liu and Ihler [13] is that they apply in a very general setting (arbitrary M-estimators) and make no assumptions about the underlying structure of the MRF. On the other hand they assume the convergence in Equation 6, and do not characterise the conditions under which this assumption holds.
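Definition 1 is easy to verify mechanically for tabular potentials: every entry of the energy table whose index has a zero coordinate must itself be zero. A sketch with our own function and argument names:

```python
import numpy as np

def is_normalised_wrt_zero(E, card):
    """Definition 1 check for a tabular potential over one clique.

    E: an array with one axis per clique variable, E[x_c] = energy value.
    card: tuple of variable cardinalities. Illustrative helper; names and
    the tabular representation are our own assumptions.
    """
    E = np.asarray(E)
    assert E.shape == tuple(card)
    for x in np.ndindex(*card):
        # Normalised w.r.t. zero: E(x_c) = 0 whenever some x_t = 0.
        if 0 in x and E[x] != 0:
            return False
    return True
```

For a pairwise binary potential this leaves a single free entry, E[1, 1], which is the sense in which normalised potentials make the parametrisation identifiable.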
The key to unifying these works is to notice that the specific decomposition used in LAP is chosen essentially to ensure the convergence of Equation 6. This leads to our development of the Strong LAP Condition and an associated Strong LAP Argument, which is a drop-in replacement for the LAP argument of Mizrahi et al. and holds for a much larger range of composite likelihood factorisations than their original proof allows.

Since the purpose of the Strong LAP Condition is to guarantee the convergence of Equation 6, we are able to import the results of Liu and Ihler [13] into the LAP framework directly, bridging the gap between LAP and consensus estimators. The same Strong LAP Condition also provides the necessary convergence guarantee for the results of Liu and Ihler to apply. Finally we show how the Strong LAP Condition can lead to the development of new estimators, by developing a new distributed estimator which subsumes the distributed pseudo-likelihood and gives estimates that are both consistent and asymptotically normal.

4 Strong LAP argument

In this section we present the Strong LAP Condition, which provides a general condition under which the convergence of Equation 6 holds. This turns out to be intimately connected to the structure of the underlying graph.

Definition 3 (Relative Path Connectivity). Let G = (V, E) be an undirected graph, and let A be a given subset of V. We say that two nodes i, j ∈ A are path connected with respect to V \ A if there exists a path P = {i, s_1, s_2, . . . , s_n, j} ≠ {i, j} with none of the s_k ∈ A. Otherwise, we say that i, j are path disconnected with respect to V \ A.

For a given A ⊆ V we partition the clique system of G into two parts: C^in_A, which contains all of the cliques that are a subset of A, and C^out_A = C \ C^in_A, which contains the remaining cliques of G.
Using this notation we can write the marginal distribution over x_A as

p(x_A | θ) = (1/Z(θ)) exp( −Σ_{c ∈ C_A^in} E(x_c | θ_c) ) · Σ_{x_{V\A}} exp( −Σ_{c ∈ C_A^out} E(x_c | θ_c) )   (9)

Figure 2: (a) Illustrating the concept of relative path connectivity. Here, A = {i, j, k}. While (k, j) are path connected via {3, 4} and (k, i) are path connected via {2, 1, 5}, the pair (i, j) are path disconnected with respect to V \ A. (b)-(d) Illustrating the difference between LAP and Strong LAP. (b) Shows a star graph with q highlighted. (c) Shows A_q required by LAP. (d) Shows an alternative neighbourhood allowed by Strong LAP. Thus, if the root node is a response variable and the leaves are covariates, Strong LAP states we can estimate each parameter separately and consistently.

Up to a normalisation constant, Σ_{x_{V\A}} exp( −Σ_{c ∈ C_A^out} E(x_c | θ_c) ) induces a Gibbs density (and therefore an MRF) on A, which we refer to as the induced MRF. (For example, as illustrated in Figure 1, centre-left, the induced MRF involves all the cliques over the nodes 4, 5 and 9.) By the Hammersley-Clifford theorem this MRF has a corresponding graph, which we refer to as the induced graph and denote G_A. Note that the induced graph does not have the same structure as the marginal: it contains only edges which are created by summing over x_{V\A}.

Remark 4. To work in the general case, we assume throughout that if an MRF contains the path {i, j, k} then summing over j creates the edge (i, k) in the marginal.

Proposition 5. Let A be a subset of V, and let i, j ∈ A. The edge (i, j) exists in the induced graph G_A if and only if i and j are path connected with respect to V \ A.

Proof. If i and j are path connected then there is a path P = {i, s_1, s_2, . . . , s_n, j} ≠ {i, j} with none of the s_k ∈ A. Summing over s_k forms an edge (s_{k−1}, s_{k+1}). By induction, summing over s_1, . . . , s_n forms the edge (i, j).
If i and j are path disconnected with respect to V \ A then summing over any s ∈ V \ A cannot form the edge (i, j), or i and j would be path connected through the path {i, s, j}. By induction, if the edge (i, j) is formed by summing over s_1, . . . , s_n this implies that i and j are path connected via {i, s_1, . . . , s_n, j}, contradicting the assumption.

Corollary 6. B ⊆ A is a clique in the induced graph G_A if and only if all pairs of nodes in B are path connected with respect to V \ A.

Definition 7 (Strong LAP condition). Let G = (V, E) be an undirected graph and let q ∈ C be a clique of interest. We say that a set A such that q ⊆ A ⊆ V satisfies the Strong LAP Condition for q if there exist i, j ∈ q such that i and j are path disconnected with respect to V \ A.

Proposition 8. Let G = (V, E) be an undirected graph and let q ∈ C be a clique of interest. If A_q satisfies the Strong LAP Condition for q then the joint distribution p(x_V | θ) and the marginal p(x_{A_q} | θ) share the same normalised potential for q.

Proof. If A_q satisfies the Strong LAP Condition for q then by Corollary 6 the induced MRF contains no potential for q. Inspection of Equation 9 reveals that the same E(x_q | θ_q) appears as a potential in both the marginal and the joint distributions. The result follows by uniqueness of the normalised potential representation.

We now restrict our attention to a set A_q which satisfies the Strong LAP Condition for a clique of interest q. The marginal p(x_{A_q} | θ) can be written as in Equation 9 in terms of θ, or in terms of auxiliary parameters α:

p(x_{A_q} | α) = (1/Z(α)) exp( −Σ_{c ∈ C_q} E(x_c | α_c) )   (10)

where C_q is the clique system over the marginal. We will assume both parametrisations are normalised with respect to zero.

Theorem 9 (Strong LAP Argument). Let q be a clique in G and let q ⊆ A_q ⊆ V. Suppose p(x_V | θ) and p(x_{A_q} | α) are parametrised so that their potentials are normalised with respect to zero and the parameters are identifiable with respect to the potentials.
If A_q satisfies the Strong LAP Condition for q then θ_q = α_q.

Proof. From Proposition 8 we know that p(x_V | θ) and p(x_{A_q} | θ) share the same clique potential for q. Alternatively, we can write the marginal distribution as in Equation 10 in terms of auxiliary variables α. By uniqueness, both parametrisations must have the same normalised potentials. Since the potentials are equal, we can match terms between the two parametrisations. In particular, since E(x_q | θ_q) = E(x_q | α_q), we see that θ_q = α_q by identifiability.

4.1 Efficiency and the choice of decomposition

Theorem 9 implies that distributed composite likelihood is consistent for a wide class of decompositions of the joint distribution; however, it does not address the issue of statistical efficiency. This question has been studied empirically in the work of Meng et al. [17, 18], who introduce a distributed algorithm for Gaussian random fields and consider neighbourhoods of different sizes. Meng et al. find that larger neighbourhoods produce better empirical results, and the following theorem confirms this observation.

Theorem 10. Let A be a set of nodes which satisfies the Strong LAP Condition for q, and let θ̂_A be the ML parameter estimate of the marginal over A. If B is a superset of A, and θ̂_B is the ML parameter estimate of the marginal over B, then (asymptotically): |θ_q − (θ̂_B)_q| ≤ |θ_q − (θ̂_A)_q|.

Proof. Suppose that |θ_q − (θ̂_B)_q| > |θ_q − (θ̂_A)_q|. Then the estimates θ̂_A over the various subsets A of B improve upon the ML estimates of the marginal on B. This contradicts the Cramér-Rao lower bound achieved by the ML estimate of the marginal on B.

In general the choice of decomposition implies a trade-off in computational and statistical efficiency. Larger factors are preferable from a statistical efficiency standpoint, but increase computation and decrease the degree of parallelism.
5 Conditional LAP

The Strong LAP Argument tells us that if we construct composite likelihood factors using marginal distributions over domains that satisfy the Strong LAP Condition, then the LAP algorithm of Mizrahi et al. [19] remains consistent. In this section we show that more can be achieved. Once we have satisfied the Strong LAP Condition, we know it is acceptable to match parameters between the joint distribution p(x_V | θ) and the auxiliary distribution p(x_{A_q} | α). To obtain a consistent LAP algorithm from this correspondence, all that is required is a consistent estimate of α_q. Mizrahi et al. [19] achieve this by applying maximum likelihood estimation to p(x_{A_q} | α), but any consistent estimator is valid. We exploit this fact to show how the Strong LAP Argument can be applied to create a consistent conditional LAP algorithm, where conditional estimation is performed in each auxiliary MRF. This allows us to apply the LAP methodology to a broader class of models. For some models, such as large densely connected graphs, we cannot rely on the LAP algorithm of Mizrahi et al. [19]. For example, for a restricted Boltzmann machine (RBM) [23], the 1-neighbourhood of any pairwise clique includes the entire graph. Hence, the complexity of LAP is exponential in the size of V. However, it is linear for conditional LAP, without sacrificing consistency.

Theorem 11. Let q be a clique in G and let x_j ∈ q ⊆ A_q ⊆ V. If A_q satisfies the Strong LAP Condition for q then p(x_V | θ) and p(x_j | x_{A_q\{x_j}}, α) share the same normalised potential for q.

Proof. We can write the conditional distribution of x_j given A_q \ {x_j} as

p(x_j | x_{A_q\{x_j}}, θ) = p(x_{A_q} | θ) / Σ_{x_j} p(x_{A_q} | θ)   (11)

Both the numerator and the denominator of Equation 11 are Gibbs distributions, and can therefore be expressed in terms of potentials over clique systems. Since A_q satisfies the Strong LAP Condition for q, we know that p(x_{A_q} | θ) and p(x_V | θ) have the same potential for q.
Moreover, the domain of Σ_{x_j} p(x_{A_q} | θ) does not include x_j, so it cannot contain a potential for q. We conclude that the potential for q in p(x_j | x_{A_q\{x_j}}, θ) must be shared with p(x_V | θ).

Remark 12. There exists a Gibbs representation normalised with respect to zero for p(x_j | x_{A_q\{x_j}}, θ). Moreover, the clique potential for q is unique in that representation.

Existence in the above remark is an immediate result of the existence of normalised representations for both the numerator and the denominator of Equation 11, and the fact that the difference of normalised potentials is a normalised potential. For uniqueness, first note that

p(x_{A_q} | θ) = p(x_j | x_{A_q\{x_j}}, θ) p(x_{A_q\{x_j}} | θ)

The variable x_j is not part of p(x_{A_q\{x_j}} | θ) and hence this distribution does not contain the clique q. Suppose there were two different normalised representations for the conditional p(x_j | x_{A_q\{x_j}}, θ). This would then imply two normalised representations for the joint, which contradicts the fact that the joint has a unique normalised representation.

We can now proceed as in the original LAP construction from Mizrahi et al. [19]. For a clique of interest q we find a set A_q which satisfies the Strong LAP Condition for q. However, instead of creating an auxiliary parametrisation of the marginal, we create an auxiliary parametrisation of the conditional in Equation 11:

p(x_j | x_{A_q\{x_j}}, α) = (1/Z_j(α)) exp( −Σ_{c ∈ C_{A_q}} E(x_c | α_c) )   (12)

From Theorem 11 we know that E(x_q | α_q) = E(x_q | θ_q). Equality of the parameters is also obtained, provided they are identifiable.

Corollary 13. If A_q satisfies the Strong LAP Condition for q then any consistent estimator of α_q in p(x_j | x_{A_q\{x_j}}, α) is also a consistent estimator of θ_q in p(x_V | θ).

5.1 Connection to distributed pseudo-likelihood and composite likelihood

Theorem 11 tells us that if A_q satisfies the Strong LAP Condition for q, then to estimate θ_q in p(x_V | θ) it is sufficient to have an estimate of α_q in p(x_j | x_{A_q\{x_j}}, α) for any x_j ∈ q.
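As a toy illustration of Corollary 13 — a sketch of ours, not the paper's implementation — the code below estimates the edge parameters touching one node of a small binary (±1) pairwise MRF by maximising the conditional likelihood of that node given its neighbours, i.e., a single pseudo-likelihood-style factor. Exact sampling by enumeration is assumed only for generating test data on a tiny model; all function names are hypothetical.

```python
import itertools
import math
import random

def sample_tiny_mrf(theta, n, rng):
    """Draw n exact samples from a small pairwise binary (+/-1) MRF by
    enumerating all states. theta maps frozenset({i, j}) -> edge parameter."""
    nodes = sorted({v for edge in theta for v in edge})
    idx = {v: k for k, v in enumerate(nodes)}
    states = list(itertools.product([-1, 1], repeat=len(nodes)))
    weights = [math.exp(sum(t * s[idx[a]] * s[idx[b]]
                            for e, t in theta.items()
                            for a, b in [sorted(e)]))
               for s in states]
    return [dict(zip(nodes, s)) for s in rng.choices(states, weights=weights, k=n)]

def fit_conditional(samples, j, nbrs, steps=300, lr=0.5):
    """Estimate the edge parameters touching node j by gradient ascent on
    the conditional log-likelihood of x_j given its neighbours (one
    pseudo-likelihood factor). Returns {neighbour: estimated parameter}."""
    w = {k: 0.0 for k in nbrs}
    n = len(samples)
    for _ in range(steps):
        grad = {k: 0.0 for k in nbrs}
        for s in samples:
            field = sum(w[k] * s[k] for k in nbrs)
            for k in nbrs:
                # gradient of x_j * field - log(2 cosh(field)) w.r.t. w_k
                grad[k] += (s[j] - math.tanh(field)) * s[k]
        for k in nbrs:
            w[k] += lr * grad[k] / n
    return w
```

On data from a chain a–b–c, fitting the factor for b recovers both edge parameters to sampling accuracy; when different factors produce duplicate estimates of the same parameter, they can then be merged by a consensus operator in the sense of Liu and Ihler.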
This tells us that it is sufficient to use pseudo-likelihood-like conditional factors, provided that their domains satisfy the Strong LAP Condition. The following remark completes the connection by telling us that the Strong LAP Condition is satisfied by the specific domains used in the pseudo-likelihood factorisation.

Remark 14. Let q = {x_1, x_2, . . . , x_m} be a clique of interest, with 1-neighbourhood A_q = q ∪ (∪_{x_i ∈ q} N(x_i)). Then for any x_j ∈ q, the set q ∪ N(x_j) satisfies the Strong LAP Condition for q. Moreover, q ∪ N(x_j) satisfies the Strong LAP Condition for all cliques in the graph that contain x_j.

Importantly, to estimate every unary clique potential we need to visit each node in the graph. However, to estimate pairwise clique potentials, visiting all nodes is redundant because the parameters of each pairwise clique are estimated twice. If a parameter is estimated more than once, it is reasonable from a statistical standpoint to apply a consensus operator to obtain a single estimate. The theory of Liu and Ihler tells us that the consensus estimates are consistent and asymptotically normal, provided Equation 6 is satisfied. In turn, the Strong LAP Condition guarantees the convergence in Equation 6.

We can go beyond pseudo-likelihood and consider either marginal or conditional factorisations over larger groups of variables. Since the asymptotic results of Liu and Ihler [13] apply to any distributed composite likelihood estimator where the convergence of Equation 6 holds, it follows that any distributed composite likelihood estimator where each factor satisfies the Strong LAP Condition (including LAP and the conditional composite likelihood estimator from Section 5) immediately gains asymptotic normality and variance guarantees as a result of their work and ours.

6 Conclusion

We presented foundational theoretical results for distributed composite likelihood.
The results provide us with sufficient conditions to apply the results of Liu and Ihler to a broad class of distributed estimators. The theory also led us to the construction of a new globally consistent estimator, whose complexity is linear even for many densely connected graphs. We view extending these results to model selection, tied parameters, models with latent variables, and inference tasks as very important avenues for future research.

References

[1] D. H. Ackley, G. Hinton, and T. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147–169, 1985. [2] A. Asuncion, Q. Liu, A. Ihler, and P. Smyth. Learning with blocks: Composite likelihood and contrastive divergence. In Artificial Intelligence and Statistics, pages 33–40, 2010. [3] J. Besag. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society, Series B, 36:192–236, 1974. [4] J. K. Bradley and C. Guestrin. Sample complexity of composite likelihood. In Artificial Intelligence and Statistics, pages 136–160, 2012. [5] P. Bremaud. Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer-Verlag, 2001. [6] B. Cox. Composite likelihood methods. Contemporary Mathematics, 80:221–239, 1988. [7] J. V. Dillon and G. Lebanon. Stochastic composite likelihood. Journal of Machine Learning Research, 11:2597–2633, 2010. [8] S. E. Fienberg and A. Rinaldo. Maximum likelihood estimation in log-linear models. The Annals of Statistics, 40(2):996–1023, 2012. [9] D. Griffeath. Introduction to random fields. In Denumerable Markov Chains, volume 40 of Graduate Texts in Mathematics, pages 425–458. Springer, 1976. [10] J. M. Hammersley and P. Clifford. Markov fields on finite graphs and lattices. 1971. [11] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009. [12] P. Liang and M. I. Jordan. An asymptotic analysis of generative, discriminative, and pseudolikelihood estimators.
In International Conference on Machine Learning, pages 584–591, 2008. [13] Q. Liu and A. Ihler. Distributed parameter estimation via pseudo-likelihood. In International Conference on Machine Learning, 2012. [14] K. V. Mardia, J. T. Kent, G. Hughes, and C. C. Taylor. Maximum likelihood estimation using composite likelihoods for closed exponential families. Biometrika, 96(4):975–982, 2009. [15] B. Marlin and N. de Freitas. Asymptotic efficiency of deterministic estimators for discrete energy-based models: Ratio matching and pseudolikelihood. In Uncertainty in Artificial Intelligence, pages 497–505, 2011. [16] B. Marlin, K. Swersky, B. Chen, and N. de Freitas. Inductive principles for restricted Boltzmann machine learning. In Artificial Intelligence and Statistics, pages 509–516, 2010. [17] Z. Meng, D. Wei, A. Wiesel, and A. O. Hero III. Distributed learning of Gaussian graphical models via marginal likelihoods. In Artificial Intelligence and Statistics, pages 39–47, 2013. [18] Z. Meng, D. Wei, A. Wiesel, and A. O. Hero III. Marginal likelihoods for distributed parameter estimation of Gaussian graphical models. Technical report, arXiv:1303.4756, 2014. [19] Y. Mizrahi, M. Denil, and N. de Freitas. Linear and parallel learning of Markov random fields. In International Conference on Machine Learning, 2014. [20] K. P. Murphy. Machine Learning: A Probabilistic Perspective. The MIT Press, 2012. [21] S. Nowozin. Constructing composite likelihoods in general random fields. In ICML Workshop on Inferning: Interactions between Inference and Learning, 2013. [22] S. Okabayashi, L. Johnson, and C. Geyer. Extending pseudo-likelihood for Potts models. Statistica Sinica, 21(1):331–347, 2011. [23] P. Smolensky. Information processing in dynamical systems: foundations of harmony theory. Parallel distributed processing: explorations in the microstructure of cognition, 1:194–281, 1986. [24] D. Strauss and M. Ikeda. Pseudolikelihood estimation for social networks. 
Journal of the American Statistical Association, 85(409):204–212, 1990. [25] A. W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998. [26] C. Varin, N. Reid, and D. Firth. An overview of composite likelihood methods. Statistica Sinica, 21:5–42, 2011. [27] A. Wiesel and A. Hero III. Distributed covariance estimation in Gaussian graphical models. IEEE Transactions on Signal Processing, 60(1):211–220, 2012.
Two-Layer Feature Reduction for Sparse-Group Lasso via Decomposition of Convex Sets

Jie Wang, Jieping Ye
Computer Science and Engineering, Arizona State University, Tempe, AZ 85287
{jie.wang.ustc, jieping.ye}@asu.edu

Abstract

Sparse-Group Lasso (SGL) has been shown to be a powerful regression technique for simultaneously discovering group and within-group sparse patterns by using a combination of the ℓ1 and ℓ2 norms. However, in large-scale applications, the complexity of the regularizers entails great computational challenges. In this paper, we propose a novel two-layer feature reduction method (TLFre) for SGL via a decomposition of its dual feasible set. The two-layer reduction is able to quickly identify the inactive groups and the inactive features, respectively, which are guaranteed to be absent from the sparse representation and can be removed from the optimization. Existing feature reduction methods are only applicable to sparse models with one sparsity-inducing regularizer. To the best of our knowledge, TLFre is the first method that is capable of dealing with multiple sparsity-inducing regularizers. Moreover, TLFre has a very low computational cost and can be integrated with any existing solvers. Experiments on both synthetic and real data sets show that TLFre improves the efficiency of SGL by orders of magnitude.

1 Introduction

Sparse-Group Lasso (SGL) [5, 16] is a powerful regression technique for identifying important groups and features simultaneously. To yield sparsity at both group and individual feature levels, SGL combines the Lasso [18] and group Lasso [28] penalties. In recent years, SGL has found great success in a wide range of applications, including but not limited to machine learning [20, 27], signal processing [17], and bioinformatics [14]. Many research efforts have been devoted to developing efficient solvers for SGL [5, 16, 10, 21].
However, when the feature dimension is extremely high, the complexity of the SGL regularizers imposes great computational challenges. Therefore, there is an increasingly urgent need for nontraditional techniques to address the challenges posed by the massive volume of the data sources. Recently, El Ghaoui et al. [4] proposed a promising feature reduction method, called SAFE screening, to screen out the so-called inactive features, which have zero coefficients in the solution, from the optimization. Thus, the size of the data matrix needed for the training phase can be significantly reduced, which may lead to substantial improvement in the efficiency of solving sparse models. Inspired by SAFE, various exact and heuristic feature screening methods have been proposed for many sparse models such as Lasso [25, 11, 19, 26], group Lasso [25, 22, 19], etc. It is worthwhile to mention that the features discarded by exact feature screening methods such as SAFE [4], DOME [26] and EDPP [25] are guaranteed to have zero coefficients in the solution. However, heuristic feature screening methods like Strong Rule [19] may mistakenly discard features which have nonzero coefficients in the solution. More recently, the idea of exact feature screening has been extended to exact sample screening, which screens out the nonsupport vectors in SVM [13, 23] and LAD [23]. As a promising data reduction tool, exact feature/sample screening is of great practical importance because it can effectively reduce the data size without sacrificing optimality [12]. However, all of the existing feature/sample screening methods are only applicable to sparse models with one sparsity-inducing regularizer. In this paper, we propose an exact two-layer feature screening method, called TLFre, for the SGL problem. The two-layer reduction is able to quickly identify the inactive groups and the inactive features, respectively, which are guaranteed to have zero coefficients in the solution.
To the best of our knowledge, TLFre is the first screening method which is capable of dealing with multiple sparsity-inducing regularizers. We note that most of the existing exact feature screening methods involve an estimation of the dual optimal solution. The difficulty in developing screening methods for sparse models with multiple sparsity-inducing regularizers like SGL is that the dual feasible set is the sum of simple convex sets. Thus, to determine the feasibility of a given point, we need to know if it is decomposable with respect to the summands, which is itself a nontrivial problem (see Section 2). One of our major contributions is that we derive an elegant decomposition method for any dual feasible solution of SGL via the framework of Fenchel's duality (see Section 3). Based on the Fenchel's dual problem of SGL, we motivate TLFre by an in-depth exploration of its geometric properties and the optimality conditions. We derive the set of the regularization parameter values corresponding to zero solutions. To develop TLFre, we need to estimate upper bounds involving the dual optimal solution. To this end, we first give an accurate estimation of the dual optimal solution via normal cones. Then, we formulate the estimation of the upper bounds as nonconvex optimization problems. We show that these nonconvex problems admit closed form solutions. Experiments on both synthetic and real data sets demonstrate that the speedup gained by TLFre in solving SGL can be orders of magnitude. All proofs are provided in the long version of this paper [24].

Notation: Let ∥·∥_1, ∥·∥ and ∥·∥_∞ be the ℓ_1, ℓ_2 and ℓ_∞ norms, respectively. Denote by B^n_1, B^n, and B^n_∞ the unit ℓ_1, ℓ_2, and ℓ_∞ norm balls in R^n (we omit the superscript if it is clear from the context). For a set C, let int C be its interior. If C is closed and convex, we define the projection operator as P_C(w) := argmin_{u∈C} ∥w − u∥. We denote by I_C(·) the indicator function of C, which is 0 on C and ∞ elsewhere.
Let Γ_0(R^n) be the class of proper closed convex functions on R^n. For f ∈ Γ_0(R^n), let ∂f be its subdifferential. The domain of f is the set dom f := {w : f(w) < ∞}. For w ∈ R^n, let [w]_i be its i-th component. For γ ∈ R, let sgn(γ) = sign(γ) if γ ≠ 0, and sgn(0) = 0. We define SGN(w) = {s ∈ R^n : [s]_i ∈ {sign([w]_i)} if [w]_i ≠ 0, [s]_i ∈ [−1, 1] if [w]_i = 0}. We denote γ_+ = max(γ, 0). Then, the shrinkage operator S_γ(·) : R^n → R^n with γ ≥ 0 is given by

[S_γ(w)]_i = (|[w]_i| − γ)_+ sgn([w]_i), i = 1, . . . , n.   (1)

2 Basics and Motivation

In this section, we briefly review some basics of SGL. Let y ∈ R^N be the response vector and X ∈ R^{N×p} be the matrix of features. With the group information available, the SGL problem [5] is

min_{β∈R^p} (1/2) ∥y − Σ_{g=1}^G X_g β_g∥^2 + λ_1 Σ_{g=1}^G √n_g ∥β_g∥ + λ_2 ∥β∥_1,   (2)

where n_g is the number of features in the g-th group, X_g ∈ R^{N×n_g} denotes the predictors in that group with the corresponding coefficient vector β_g, and λ_1, λ_2 are positive regularization parameters. Without loss of generality, let λ_1 = αλ and λ_2 = λ with α > 0. Then, problem (2) becomes:

min_{β∈R^p} (1/2) ∥y − Σ_{g=1}^G X_g β_g∥^2 + λ ( α Σ_{g=1}^G √n_g ∥β_g∥ + ∥β∥_1 ).   (3)

By the Lagrangian multipliers method [24], the dual problem of SGL is

sup_θ { (1/2)∥y∥^2 − (1/2)∥y/λ − θ∥^2 : X_g^T θ ∈ D^α_g := α√n_g B + B_∞, g = 1, . . . , G }.   (4)

It is well-known that the dual feasible set of Lasso is the intersection of closed half spaces (thus a polytope); for group Lasso, the dual feasible set is the intersection of ellipsoids. Surprisingly, the geometric properties of these dual feasible sets play fundamentally important roles in most of the existing screening methods for sparse models with one sparsity-inducing regularizer [23, 11, 25, 4]. When we incorporate multiple sparsity-inducing regularizers into the sparse models, problem (4) indicates that the dual feasible set can be much more complicated. Although (4) provides a geometric description of the dual feasible set of SGL, it is not suitable for further analysis. Notice that even the feasibility of a given point θ is not easy to determine, since it is nontrivial to tell if X_g^T θ can be decomposed into b_1 + b_2 with b_1 ∈ α√n_g B and b_2 ∈ B_∞. Therefore, to develop screening methods for SGL, it is desirable to gain a deeper understanding of the sum of simple convex sets. In the next section, we analyze the dual feasible set of SGL in depth via the Fenchel's Duality Theorem. We show that for each X_g^T θ ∈ D^α_g, Fenchel's duality naturally leads to an explicit decomposition X_g^T θ = b_1 + b_2, with one term belonging to α√n_g B and the other to B_∞. This lays the foundation of the proposed screening method for SGL.

3 The Fenchel's Dual Problem of SGL

In Section 3.1, we derive the Fenchel's dual of SGL via the Fenchel's Duality Theorem. We then motivate TLFre and sketch our approach in Section 3.2. In Section 3.3, we discuss the geometric properties of the Fenchel's dual of SGL and derive the set of (λ, α) leading to zero solutions.

3.1 The Fenchel's Dual of SGL via Fenchel's Duality Theorem

To derive the Fenchel's dual problem of SGL, we need the Fenchel's Duality Theorem as stated in Theorem 1. The conjugate of f ∈ Γ_0(R^n) is the function f* ∈ Γ_0(R^n) defined by f*(z) = sup_w ⟨w, z⟩ − f(w).

Theorem 1.
[Fenchel’s Duality Theorem] Let f ∈Γ0(RN), Ω∈Γ0(Rp), and T (β) = y −Xβ be an affine mapping from Rp to RN. Let p∗, d∗∈[−∞, ∞] be primal and dual values defined, respectively, by the Fenchel problems: p∗= infβ∈Rp f(y −Xβ) + λΩ(β); d∗= supθ∈RN −f ∗(λθ) −λΩ∗(XT θ) + λ⟨y, θ⟩. One has p∗≥d∗. If, furthermore, f and Ωsatisfy the condition 0 ∈int (dom f −y + Xdom Ω), then the equality holds, i.e., p∗= d∗, and the supreme is attained in the dual problem if finite. We omit the proof of Theorem 1 since it is a slight modification of Theorem 3.3.5 in [2]. Let f(w) = 1 2∥w∥2, and λΩ(β) be the second term in (3). Then, SGL can be written as minβ f(y −Xβ) + λΩ(β). To derive the Fenchel’s dual problem of SGL, Theorem 1 implies that we need to find f ∗and Ω∗. It is well-known that f ∗(z) = 1 2∥z∥2. Therefore, we only need to find Ω∗, where the concept infimal convolution is needed. Let h, g ∈Γ0(Rn). The infimal convolution of h and g is defined by (h□g)(ξ) = infη h(η) + g(ξ −η), and it is exact at a point ξ if there exists a η∗(ξ) such that (h□g)(ξ) = h(η∗(ξ)) + g(ξ −η∗(ξ)). h□g is exact if it is exact at every point of its domain, in which case it is denoted by h ⊡g. Lemma 2. Let Ωα 1 (β) = α PG g=1 √ng∥βg∥, Ω2(β) = ∥β∥1 and Ω(β) = Ωα 1 (β) + Ω2(β). Moreover, let Cα g = α√ngB ⊂Rng, g = 1, . . . , G. Then, the following hold: (i): (Ωα 1 )∗(ξ) = PG g=1 ICα g (ξg) , (Ω2)∗(ξ) = PG g=1 IB∞(ξg), (ii): Ω∗(ξ) = ((Ωα 1 )∗⊡(Ω2)∗) (ξ) = PG g=1 IB ξg−PB∞(ξg) α√ng , where ξg ∈Rng is the sub-vector of ξ corresponding to the gth group. Note that PB∞(ξg) admits a closed form solution, i.e., [PB∞(ξg)]i = sgn ([ξg]i) min (|[ξg]i| , 1). Combining Theorem 1 and Lemma 2, the Fenchel’s dual of SGL can be derived as follows. Theorem 3. For the SGL problem in (3), the following hold: (i): The Fenchel’s dual of SGL is given by: inf θ 1 2∥y λ −θ∥2 −1 2∥y∥2 :
XT g θ −PB∞(XT g θ)
≤α√ng, g = 1, . . . , G . (5) (ii): Let β∗(λ, α) and θ∗(λ, α) be the optimal solutions of problems (3) and (5), respectively. Then, λθ∗(λ, α) =y −Xβ∗(λ, α), (6) XT g θ∗(λ, α) ∈α√ng∂∥β∗ g(λ, α)∥+ ∂∥β∗ g(λ, α)∥1, g = 1, . . . , G. (7) 3 Remark 1. We note that the shrinkage operator can also be expressed by Sγ(w) = w −PγB∞(w), γ ≥0. (8) Therefore, problem (5) can be written more compactly as inf θ 1 2∥y λ −θ∥2 −1 2∥y∥2 :
S1(XT g θ)
≤α√ng, g = 1, . . . , G . (9) Remark 2. Eq. (6) and Eq. (7) can be obtained by the Fenchel-Young inequality [2, 24]. They are the so-called KKT conditions [3] and can also be obtained by the Lagrangian multiplier method [24]. Moreover, for the SGL problem, its Lagrangian dual in (4) and Fenchel’s dual in (5) are indeed equivalent to each other [24]. Remark 3. An appealing advantage of the Fenchel’s dual in (5) is that we have a natural decomposition of all points ξg ∈Dα g : ξg = PB∞(ξg)+S1(ξg)) with PB∞(ξg) ∈B∞and S1(ξg) ∈Cα g . As a result, this leads to a convenient way to determine the feasibility of any dual variable θ by checking if S1(XT g θ) ∈Cα g , g = 1, . . . , G. 3.2 Motivation of the Two-Layer Screening Rules We motive the two-layer screening rules via the KKT condition in Eq. (7). As implied by the name, there are two layers in our method. The first layer aims to identify the inactive groups, and the second layer is designed to detect the inactive features for the remaining groups. by Eq. (7), we have the following cases by noting ∂∥w∥1 = SGN(w) and ∂∥w∥= (n w ∥w∥ o , if w ̸= 0, {u : ∥u∥≤1}, if w = 0. Case 1. If β∗ g(λ, α) ̸= 0, we have [XT g θ∗(λ, α)]i ∈ ( α√ng [β∗ g(λ,α)]i ∥β∗ g(λ,α)∥+ sign([β∗ g(λ, α)]i), if [β∗ g(λ, α)]i ̸= 0, [−1, 1], if [β∗ g(λ, α)]i = 0. (10) In view of Eq. (10), we can see that (a): S1(XT g θ∗(λ, α)) = α√ng β∗ g(λ1,λ2) ∥β∗ g(λ1,λ2)∥and ∥S1(XT g θ∗(λ, α))∥= α√ng, (11) (b): If [XT g θ∗(λ, α]i ≤1 then [β∗ g(λ, α)]i = 0. (12) Case 2. If β∗ g(λ, α) = 0, we have [XT g θ∗(λ, α)]i ∈α√ng[ug]i + [−1, 1], ∥ug∥≤1. (13) The first layer (group-level) of TLFre From (11) in Case 1, we have
S1(XT g θ∗(λ, α))
< α√ng ⇒β∗ g(λ, α) = 0. (R1) Clearly, (R1) can be used to identify the inactive groups and thus a group-level screening rule. The second layer (feature-level) of TLFre Let xgi be the ith column of Xg. We have [XT g θ∗(λ, α)]i = xT giθ∗(λ, α). In view of (12) and (13), we can see that xT giθ∗(λ, α) ≤1 ⇒[β∗ g(λ, α)]i = 0. (R2) Different from (R1), (R2) detects the inactive features and thus it is a feature-level screening rule. However, we cannot directly apply (R1) and (R2) to identify the inactive groups/features because both need to know θ∗(λ, α). Inspired by the SAFE rules [4], we can first estimate a region Θ containing θ∗(λ, α). Let XT g Θ = {XT g θ : θ ∈Θ}. Then, (R1) and (R2) can be relaxed as follows: supξg ∥S1(ξg)∥: ξg ∈Ξg ⊇XT g Θ < α√ng ⇒β∗ g(λ, α) = 0, (R1∗) supθ xT giθ : θ ∈Θ ≤1 ⇒[β∗ g(λ, α)]i = 0. (R2∗) Inspired by (R1∗) and (R2∗), we develop TLFre via the following three steps: Step 1. Given λ and α, we estimate a region Θ that contains θ∗(λ, α). Step 2. We solve for the supreme values in (R1∗) and (R2∗). Step 3. By plugging in the supreme values from Step 2, (R1∗) and (R2∗) result in the desired two-layer screening rules for SGL. 4 3.3 The Set of Parameter Values Leading to Zero Solution For notational convenience, let Fα g = {θ : ∥S1(XT g θ)∥≤α√ng}, g = 1, . . . , G; and thus the feasible set of the Fenchel’s dual of SGL is Fα = ∩g=1,...,G Fα g . In view of problem (5) [or (9)], we can see that θ∗(λ, α) is the projection of y/λ on Fα, i.e., θ∗(λ, α) = PFα(y/λ). Thus, if y/λ ∈Fα, we have θ∗(λ, α) = y/λ. Moreover, by (R1), we can see that β∗(λ, α) = 0 if y/λ is an interior point of Fα. Indeed, we have the following stronger result. Theorem 4. For the SGL problem, let λα max = maxg {ρg :
S1(XT g y/ρg)
= α√ng}. Then, y λ ∈Fα ⇔θ∗(λ, α) = y λ ⇔β∗(λ, α) = 0 ⇔λ ≥λα max. ρg in the definition of λα max admits a closed form solution [24]. Theorem 4 implies that the optimal solution β∗(λ, α) is 0 as long as y/λ ∈Fα. This geometric property also leads to an explicit characterization of the set of (λ1, λ2) such that the corresponding solution of problem (2) is 0. We denote by ¯β∗(λ1, λ2) the optimal solution of problem (2). Corollary 5. For the SGL problem in (2), let λmax 1 (λ2) = maxg 1 √ng ∥Sλ2(XT g y)∥. Then, (i): ¯β∗(λ1, λ2) = 0 ⇔λ1 ≥λmax 1 (λ2). (ii): If λ1 ≥λmax 1 := maxg 1 √ng ∥XT g y∥or λ2 ≥λmax 2 := ∥XT y∥∞, then ¯β∗(λ1, λ2) = 0. 4 The Two-Layer Screening Rules for SGL We follow the three steps in Section 3.2 to develop TLFre. In Section 4.1, we give an accurate estimation of θ∗(λ, α) via normal cones [15]. Then, we compute the supreme values in (R1∗) and (R2∗) by solving nonconvex problems in Section 4.2. We present the TLFre rules in Section 4.3. 4.1 Estimation of the Dual Optimal Solution Because of the geometric property of the dual problem in (5), i.e., θ∗(λ, α) = PFα(y/λ), we have a very useful characterization of the dual optimal solution via the so-called normal cones [15]. Definition 1. [15] For a closed convex set C ∈Rn and a point w ∈C, the normal cone to C at w is NC(w) = {v : ⟨v, w′ −w⟩≤0, ∀w′ ∈C}. (14) By Theorem 4, θ∗(¯λ, α) is known if ¯λ = λα max. Thus, we can estimate θ∗(λ, α) in terms of θ∗(¯λ, α). Due to the same reason, we only consider the cases with λ < λα max for θ∗(λ, α) to be estimated. Remark 4. In many applications, the parameter values that perform the best are usually unknown. To determine appropriate parameter values, commonly used approaches such as cross validation and stability selection involve solving SGL many times over a grip of parameter values. Thus, given {α(i)}I i=1 and λ(1) ≥· · · ≥λ(J ), we can fix the value of α each time and solve SGL by varying the value of λ. 
We repeat the process until we solve SGL for all of the parameter values.

Theorem 6. For the SGL problem in (3), suppose that θ*(λ̄, α) is known with λ̄ ≤ λ^α_max. Let ρ_g, g = 1, ..., G, be defined by Theorem 4. For any λ ∈ (0, λ̄), we define

n_α(λ̄) = y/λ̄ − θ*(λ̄, α), if λ̄ < λ^α_max;
n_α(λ̄) = X_* S_1(X_*^T y/λ^α_max), if λ̄ = λ^α_max, where X_* = argmax_{X_g} ρ_g;

v_α(λ, λ̄) = y/λ − θ*(λ̄, α),
v_α^⊥(λ, λ̄) = v_α(λ, λ̄) − (⟨v_α(λ, λ̄), n_α(λ̄)⟩ / ∥n_α(λ̄)∥²) n_α(λ̄).

Then, the following hold:
(i) n_α(λ̄) ∈ N_{F^α}(θ*(λ̄, α));
(ii) ∥θ*(λ, α) − (θ*(λ̄, α) + (1/2) v_α^⊥(λ, λ̄))∥ ≤ (1/2) ∥v_α^⊥(λ, λ̄)∥.

For notational convenience, let o_α(λ, λ̄) = θ*(λ̄, α) + (1/2) v_α^⊥(λ, λ̄). Theorem 6 shows that θ*(λ, α) lies inside the ball of radius (1/2) ∥v_α^⊥(λ, λ̄)∥ centered at o_α(λ, λ̄).

4.2 Solving for the Supremum Values via Nonconvex Optimization

We solve the optimization problems in (R1*) and (R2*). To simplify notation, let

Θ = {θ : ∥θ − o_α(λ, λ̄)∥ ≤ (1/2) ∥v_α^⊥(λ, λ̄)∥}, (15)
Ξ_g = {ξ_g : ∥ξ_g − X_g^T o_α(λ, λ̄)∥ ≤ (1/2) ∥v_α^⊥(λ, λ̄)∥ ∥X_g∥_2}, g = 1, ..., G. (16)

Theorem 6 indicates that θ*(λ, α) ∈ Θ. Moreover, we can see that X_g^T Θ ⊆ Ξ_g, g = 1, ..., G. To develop the TLFre rule by (R1*) and (R2*), we need to solve the following optimization problems:

s*_g(λ, λ̄; α) = sup_{ξ_g} { ∥S_1(ξ_g)∥ : ξ_g ∈ Ξ_g }, g = 1, ..., G, (17)
t*_gi(λ, λ̄; α) = sup_θ { |x_gi^T θ| : θ ∈ Θ }, i = 1, ..., n_g, g = 1, ..., G. (18)

Solving problem (17). We consider the following equivalent problem of (17):

(1/2) (s*_g(λ, λ̄; α))² = sup_{ξ_g} { (1/2) ∥S_1(ξ_g)∥² : ξ_g ∈ Ξ_g }. (19)

The objective function of problem (19) is continuously differentiable and the feasible set is a ball. Thus, (19) is a nonconvex problem because we need to maximize a convex function over a convex set. We derive the closed form solutions of problems (17) and (19) as follows.

Theorem 7. For problems (17) and (19), let c = X_g^T o_α(λ, λ̄), r = (1/2) ∥v_α^⊥(λ, λ̄)∥ ∥X_g∥_2, and let Ξ*_g be the set of optimal solutions.
(i) Suppose that c ∉ B_∞, i.e., ∥c∥_∞ > 1. Let u = r S_1(c)/∥S_1(c)∥. Then,
s*_g(λ, λ̄; α) = ∥S_1(c)∥ + r and Ξ*_g = {c + u}. (20)
(ii) Suppose that c is a boundary point of B_∞, i.e., ∥c∥_∞ = 1. Then,
s*_g(λ, λ̄; α) = r and Ξ*_g = {c + u : u ∈ N_{B_∞}(c), ∥u∥ = r}. (21)
(iii) Suppose that c ∈ int B_∞, i.e., ∥c∥_∞ < 1. Let i* ∈ I* = {i : |[c]_i| = ∥c∥_∞}. Then,
s*_g(λ, λ̄; α) = (∥c∥_∞ + r − 1)_+, (22)
Ξ*_g = Ξ_g, if Ξ_g ⊂ B_∞;
Ξ*_g = {c + r · sgn([c]_i*) e_i* : i* ∈ I*}, if Ξ_g ⊄ B_∞ and c ≠ 0;
Ξ*_g = {r · e_i*, −r · e_i* : i* ∈ I*}, if Ξ_g ⊄ B_∞ and c = 0,
where e_i is the i-th standard basis vector.

Solving problem (18). Problem (18) can be solved directly via the Cauchy-Schwarz inequality.

Theorem 8. For problem (18), we have t*_gi(λ, λ̄; α) = |x_gi^T o_α(λ, λ̄)| + (1/2) ∥v_α^⊥(λ, λ̄)∥ ∥x_gi∥.

4.3 The Proposed Two-Layer Screening Rules

To develop the two-layer screening rules for SGL, we only need to plug the supremum values s*_g(λ2, λ̄2; λ1) and t*_gi(λ2, λ̄2; λ1) into (R1*) and (R2*). We present the TLFre rule as follows.

Theorem 9. For the SGL problem in (3), suppose that we are given α and a sequence of parameter values λ^α_max = λ^(0) > λ^(1) > ... > λ^(J). For each integer 0 ≤ j < J, we assume that β*(λ^(j), α) is known. Let θ*(λ^(j), α), v_α^⊥(λ^(j+1), λ^(j)) and s*_g(λ^(j+1), λ^(j); α) be given by Eq. (6), Theorem 6 and Theorem 7, respectively. Then, for g = 1, ..., G, the following holds:

s*_g(λ^(j+1), λ^(j); α) < α√n_g ⇒ β*_g(λ^(j+1), α) = 0. (L1)

For the ĝ-th group that does not pass the rule in (L1), we have [β*_ĝ(λ^(j+1), α)]_i = 0 if

|x_ĝi^T ( (y − X β*(λ^(j), α))/λ^(j) + (1/2) v_α^⊥(λ^(j+1), λ^(j)) )| + (1/2) ∥v_α^⊥(λ^(j+1), λ^(j))∥ ∥x_ĝi∥ ≤ 1. (L2)

(L1) and (L2) are the first-layer and second-layer screening rules of TLFre, respectively.

5 Experiments

We evaluate TLFre on both synthetic and real data sets. To measure the performance of TLFre, we compute the rejection ratios of (L1) and (L2), respectively. Specifically, let m be the number of features that have 0 coefficients in the solution, G be the index set of groups that are discarded by (L1), and p be the number of inactive features that are detected by (L2).
The rejection ratios of (L1) and (L2) are defined by r1 = (Σ_{g∈G} n_g)/m and r2 = p/m, respectively. Moreover, we report the speedup gained by TLFre, i.e., the ratio of the running time of the solver without screening to the running time of the solver with TLFre. The solver used in this paper is from SLEP [9]. To determine appropriate values of α and λ by cross validation or stability selection, we can run TLFre with as many parameter values as we need. Given a data set, for illustrative purposes only, we select seven values of α from {tan(ψ) : ψ = 5°, 15°, 30°, 45°, 60°, 75°, 85°}. Then, for each value of α, we run TLFre along a sequence of 100 values of λ equally spaced on the logarithmic scale of λ/λ^α_max from 1 to 0.01. Thus, 700 pairs of parameter values of (λ, α) are sampled in total.

[Figure 1 omitted: panel (a) plots λ^max_1(λ2) together with the sampled (λ2, λ1) pairs for α = tan(5°), ..., tan(85°); panels (b)-(h) plot the rejection ratios against λ/λ^α_max for each α.]
Figure 1: Rejection ratios of TLFre on the Synthetic 1 data set.
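The two rejection ratios are simple counts over the solver's solution. A minimal sketch of how they might be computed (the function and variable names are ours, not from the paper's code):

```python
import numpy as np

def rejection_ratios(beta, group_sizes, discarded_groups, extra_inactive_features):
    """Rejection ratios r1 and r2 as defined above.

    beta: solution vector; m is the number of zero coefficients in it.
    group_sizes: n_g for each group.
    discarded_groups: indices of groups discarded by the group-level rule (L1).
    extra_inactive_features: count p of inactive features detected by (L2).
    """
    m = int(np.sum(beta == 0))  # number of zero coefficients in the solution
    r1 = sum(group_sizes[g] for g in discarded_groups) / m
    r2 = extra_inactive_features / m
    return r1, r2
```

With a solution of six coefficients, four of them zero, one discarded group of size 3 and one extra feature caught by (L2), this gives r1 = 0.75 and r2 = 0.25.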
[Figure 2 omitted: same panel layout as Figure 1.]
Figure 2: Rejection ratios of TLFre on the Synthetic 2 data set.

5.1 Simulation Studies

We perform experiments on two synthetic data sets that are commonly used in the literature [19, 29]. The true model is y = Xβ* + 0.01ε, ε ∼ N(0, 1). We generate two data sets with 250 × 10000 entries: Synthetic 1 and Synthetic 2. We randomly break the 10000 features into 1000 groups. For Synthetic 1, the entries of the data matrix X are i.i.d. standard Gaussian with pairwise correlation zero, i.e., corr(x_i, x_j) = 0 for i ≠ j. For Synthetic 2, the entries of X are standard Gaussian with pairwise correlation corr(x_i, x_j) = 0.5^{|i−j|}. To construct β*, we first randomly select γ1 percent of the groups. Then, for each selected group, we randomly select γ2 percent of the features. The selected components of β* are populated from a standard Gaussian and the remaining ones are set to 0. We set γ1 = γ2 = 10 for Synthetic 1 and γ1 = γ2 = 20 for Synthetic 2. The figures in the upper left corner of Fig. 1 and Fig. 2 show the plots of λ^max_1(λ2) (see Corollary 5) and the sampled parameter values of λ and α (recall that λ1 = αλ and λ2 = λ).
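The data-generating process above can be sketched as follows. This is our own illustration: we use contiguous equal-sized groups rather than a random partition, and an AR(1) construction, one standard way to realize the 0.5^{|i−j|} correlation pattern (rho = 0 recovers the i.i.d. case of Synthetic 1):

```python
import numpy as np

def make_synthetic(n=250, p=10000, n_groups=1000, rho=0.0,
                   gamma1=0.1, gamma2=0.1, seed=0):
    """Sketch of the synthetic SGL design: Gaussian X with corr(x_i, x_j) = rho^|i-j|,
    features split into equal groups, and beta* sparse at both the group level
    (fraction gamma1 of groups) and within selected groups (fraction gamma2)."""
    rng = np.random.default_rng(seed)
    # AR(1) construction: stationary, unit variance, correlation rho^|i-j|
    X = np.empty((n, p))
    X[:, 0] = rng.standard_normal(n)
    for j in range(1, p):
        X[:, j] = rho * X[:, j - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
    groups = np.split(np.arange(p), n_groups)
    beta = np.zeros(p)
    chosen = rng.choice(n_groups, size=int(gamma1 * n_groups), replace=False)
    for g in chosen:
        k = max(1, int(gamma2 * len(groups[g])))
        idx = rng.choice(groups[g], size=k, replace=False)
        beta[idx] = rng.standard_normal(k)
    y = X @ beta + 0.01 * rng.standard_normal(n)
    return X, y, beta, groups
```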
For the other figures, the blue and red regions represent the rejection ratios of (L1) and (L2), respectively. We can see that TLFre is very effective in discarding inactive groups/features; that is, more than 90% of inactive features can be detected. Moreover, we can observe that the first-layer screening (L1) becomes more effective with a larger α. Intuitively, this is because the group Lasso penalty plays a more important role in enforcing sparsity for larger values of α (recall that λ1 = αλ). The top and middle parts of Table 1 indicate that the speedup gained by TLFre is very significant (up to 30 times) and that TLFre is very efficient: compared to the running time of the solver without screening, the running time of TLFre is negligible. The running time of TLFre includes that of computing ∥X_g∥_2, g = 1, ..., G, which can be done efficiently by the power method [6]; indeed, this computation can be shared across runs of TLFre with different parameter values.

5.2 Experiments on Real Data Set

We perform experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) data set (http://adni.loni.usc.edu/). The data matrix consists of 747 samples with 426040 single nucleotide polymorphisms (SNPs), which are divided into 94765 groups. The response vector is the grey matter volume (GMV).

Table 1: Running time (in seconds) for solving SGL along a sequence of 100 tuning parameter values of λ equally spaced on the logarithmic scale of λ/λ^α_max from 1.0 to 0.01 by (a): the solver [9] without screening; (b): the solver combined with TLFre. The top and middle parts report the results of TLFre on Synthetic 1 and Synthetic 2. The bottom part reports the results of TLFre on the ADNI data set with the GMV data as response.

α               tan(5°)   tan(15°)  tan(30°)  tan(45°)  tan(60°)  tan(75°)  tan(85°)
Synthetic 1
  solver         298.36    301.74    308.69    307.71    311.33    307.53    291.24
  TLFre            0.77      0.78      0.79      0.79      0.81      0.79      0.77
  TLFre+solver    10.26     12.47     15.73     17.69     19.71     21.95     22.53
  speedup         29.09     24.19     19.63     17.40     15.79     14.01     12.93
Synthetic 2
  solver         294.64    294.92    297.29    297.50    297.59    295.51    292.24
  TLFre            0.79      0.80      0.80      0.81      0.81      0.81      0.82
  TLFre+solver    11.05     12.89     16.08     18.90     20.45     21.58     22.80
  speedup         26.66     22.88     18.49     15.74     14.55     13.69     12.82
ADNI+GMV
  solver       30652.56  30755.63  30838.29  31096.10  30850.78  30728.27  30572.35
  TLFre           64.08     64.56     64.96     65.00     64.89     65.17     65.05
  TLFre+solver   372.04    383.17    386.80    402.72    391.63    385.98    382.62
  speedup         82.39     80.27     79.73     77.22     78.78     79.61     79.90

[Figure 3 omitted: same panel layout as Figures 1 and 2, on the ADNI data.]
Figure 3: Rejection ratios of TLFre on the ADNI data set with grey matter volume as response.

The figure in the upper left corner of Fig. 3 shows the plots of λ^max_1(λ2) (see Corollary 5) and the sampled parameter values of α and λ. The other figures present the rejection ratios of (L1) and (L2) by blue and red regions, respectively.
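The quantity λ^max_1(λ2) plotted above has the closed form given in Corollary 5 and is cheap to compute directly. A sketch (our own helper names; `groups` is a list of column-index arrays):

```python
import numpy as np

def soft_threshold(u, t):
    """Componentwise soft-thresholding operator S_t(u)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def lambda1_max(X, y, groups, lam2):
    """lambda_1^max(lambda_2) from Corollary 5: the smallest lambda_1 at which
    the SGL solution is identically zero, for a given lambda_2."""
    return max(
        np.linalg.norm(soft_threshold(X[:, idx].T @ y, lam2)) / np.sqrt(len(idx))
        for idx in groups
    )
```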
We can see that almost all of the inactive groups/features are discarded by TLFre: the combined rejection ratio r1 + r2 is very close to 1 in all cases. The bottom part of Table 1 shows that TLFre leads to a very significant speedup (about 80 times). In other words, the solver without screening needs about eight and a half hours to solve the 100 SGL problems for each value of α; combined with TLFre, the solver needs only six to eight minutes. Moreover, we can observe that the computational cost of TLFre is negligible compared to that of the solver without screening. This demonstrates the efficiency of TLFre.

6 Conclusion

In this paper, we propose a novel feature reduction method for SGL via decomposition of convex sets. We also derive the set of parameter values that lead to zero solutions of SGL. To the best of our knowledge, TLFre is the first method which is applicable to sparse models with multiple sparsity-inducing regularizers. More importantly, the proposed approach provides a novel framework for developing screening methods for complex sparse models with multiple sparsity-inducing regularizers, e.g., ℓ1 SVM that performs both sample and feature selection, fused Lasso, and tree Lasso with more than two regularizers. Experiments on both synthetic and real data sets demonstrate the effectiveness and efficiency of TLFre. We plan to generalize the idea of TLFre to ℓ1 SVM, fused Lasso and tree Lasso, which are expected to admit multiple layers of screening.

References

[1] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.
[2] J. Borwein and A. Lewis. Convex Analysis and Nonlinear Optimization, Second Edition. Canadian Mathematical Society, 2006.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] L. El Ghaoui, V. Viallon, and T. Rabbani. Safe feature elimination in sparse supervised learning. Pacific Journal of Optimization, 8:667-698, 2012.
[5] J. Friedman, T. Hastie, and R. Tibshirani. A note on the group lasso and a sparse group lasso. arXiv:1001.0736.
[6] N. Halko, P. Martinsson, and J. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53:217-288, 2011.
[7] J.-B. Hiriart-Urruty. From convex optimization to nonconvex optimization: necessary and sufficient conditions for global optimality. In Nonsmooth Optimization and Related Topics. Springer, 1988.
[8] J.-B. Hiriart-Urruty. A note on the Legendre-Fenchel transform of convex composite functions. In Nonsmooth Mechanics and Analysis. Springer, 2006.
[9] J. Liu, S. Ji, and J. Ye. SLEP: Sparse Learning with Efficient Projections. Arizona State University, 2009.
[10] J. Liu and J. Ye. Moreau-Yosida regularization for grouped tree structure learning. In Advances in Neural Information Processing Systems, 2010.
[11] J. Liu, Z. Zhao, J. Wang, and J. Ye. Safe screening with variational inequalities and its application to lasso. In International Conference on Machine Learning, 2014.
[12] K. Ogawa, Y. Suzuki, S. Suzumura, and I. Takeuchi. Safe sample screening for support vector machine. arXiv:1401.6740, 2014.
[13] K. Ogawa, Y. Suzuki, and I. Takeuchi. Safe screening of non-support vectors in pathwise SVM computation. In International Conference on Machine Learning, 2013.
[14] J. Peng, J. Zhu, A. Bergamaschi, W. Han, D. Noh, J. Pollack, and P. Wang. Regularized multivariate regression for identifying master predictors with application to integrative genomics study of breast cancer. The Annals of Applied Statistics, 4:53-77, 2010.
[15] A. Ruszczyński. Nonlinear Optimization. Princeton University Press, 2006.
[16] N. Simon, J. Friedman, T. Hastie, and R. Tibshirani. A sparse-group lasso. Journal of Computational and Graphical Statistics, 22:231-245, 2013.
[17] P. Sprechmann, I. Ramírez, G. Sapiro, and Y. Eldar. C-HiLasso: a collaborative hierarchical sparse modeling framework. IEEE Transactions on Signal Processing, 59:4183-4198, 2011.
[18] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267-288, 1996.
[19] R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. Tibshirani. Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society, Series B, 74:245-266, 2012.
[20] M. Vidyasagar. Machine learning methods in the computational biology of cancer. In Proceedings of the Royal Society A, 2014.
[21] M. Vincent and N. Hansen. Sparse group lasso and high dimensional multinomial classification. Computational Statistics and Data Analysis, 71:771-786, 2014.
[22] J. Wang, J. Jun, and J. Ye. Efficient mixed-norm regularization: Algorithms and safe screening methods. arXiv:1307.4156v1.
[23] J. Wang, P. Wonka, and J. Ye. Scaling SVM and least absolute deviations via exact data reduction. In International Conference on Machine Learning, 2014.
[24] J. Wang and J. Ye. Two-layer feature reduction for sparse-group lasso via decomposition of convex sets. arXiv:1410.4210v1, 2014.
[25] J. Wang, J. Zhou, P. Wonka, and J. Ye. Lasso screening rules via dual polytope projection. In Advances in Neural Information Processing Systems, 2013.
[26] Z. J. Xiang and P. J. Ramadge. Fast lasso screening tests based on correlations. In IEEE ICASSP, 2012.
[27] D. Yogatama and N. Smith. Linguistic structured sparsity in text categorization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2014.
[28] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68:49-67, 2006.
[29] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301-320, 2005.
The Large Margin Mechanism for Differentially Private Maximization

Kamalika Chaudhuri, UC San Diego, La Jolla, CA. kamalika@cs.ucsd.edu
Daniel Hsu, Columbia University, New York, NY. djhsu@cs.columbia.edu
Shuang Song, UC San Diego, La Jolla, CA. shs037@eng.ucsd.edu

Abstract

A basic problem in the design of privacy-preserving algorithms is the private maximization problem: the goal is to pick an item from a universe that (approximately) maximizes a data-dependent function, all under the constraint of differential privacy. This problem has been used as a sub-routine in many privacy-preserving algorithms for statistics and machine learning. Previous algorithms for this problem are either range-dependent, i.e., their utility diminishes with the size of the universe, or only apply to very restricted function classes. This work provides the first general purpose, range-independent algorithm for private maximization that guarantees approximate differential privacy. Its applicability is demonstrated on two fundamental tasks in data mining and machine learning.

1 Introduction

Differential privacy [17] is a cryptographically motivated definition of privacy that has recently gained significant attention in the data mining and machine learning communities. An algorithm for processing sensitive data enforces differential privacy by ensuring that the likelihood of any outcome does not change by much when a single individual's private data changes. Privacy is typically guaranteed by adding noise either to the sensitive data, or to the output of an algorithm that processes the sensitive data. For many machine learning tasks, this leads to a corresponding degradation in accuracy or utility. Thus a central challenge in differentially private learning is to design algorithms with better tradeoffs between privacy and utility for a wide variety of statistics and machine learning tasks.
In this paper, we study the private maximization problem, a fundamental problem that arises in the design of privacy-preserving algorithms for a number of statistical and machine learning applications. We are given a sensitive dataset D ⊆ X^n comprised of records from n individuals. We are also given a data-dependent objective function f : U × X^n → R, where U is a universe of K items to choose from, and f(i, ·) is (1/n)-Lipschitz for all i ∈ U. That is, |f(i, D′) − f(i, D″)| ≤ 1/n for all i and for any D′, D″ ∈ X^n differing in just one individual's entry. Always selecting an item that exactly maximizes f(·, D) is generally non-private, so the goal is to select, in a differentially private manner, an item i ∈ U with as high an objective f(i, D) as possible. This is a very general algorithmic problem that arises in many applications, including private PAC learning [25] (choosing the most accurate classifier), private decision tree induction [21] (choosing the most informative split), private frequent itemset mining [5] (choosing the most frequent itemset), private validation [12] (choosing the best tuning parameter), and private multiple hypothesis testing [32] (choosing the most likely hypothesis). The most common algorithms for this problem are the exponential mechanism [28] and a computationally efficient alternative from [5], which we call the max-of-Laplaces mechanism. These algorithms are general (they do not require any additional conditions on f to succeed) and hence have been widely applied. However, a major limitation of both is that their utility suffers from an explicit range-dependence: it deteriorates with increasing universe size. The range-dependence persists even when there is a single clear maximizer of f(·, D), or a few near maximizers, and even when the maximizer remains the same after changing the entries of a large number of individuals in the data.
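As a concrete example of a (1/n)-Lipschitz objective of the kind described above (this example is ours, for illustration): the fraction of records equal to each item. Changing one individual's record moves every coordinate of f(·, D) by at most 1/n:

```python
import numpy as np

def item_frequencies(D, K):
    """f(i, D) = fraction of records in D equal to item i, for i in {0, ..., K-1}.

    Changing a single record decrements one bin and increments another,
    so each f(i, .) changes by at most 1/n: a (1/n)-Lipschitz objective.
    """
    n = len(D)
    return np.bincount(D, minlength=K) / n
```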
Getting around range-dependence has therefore been a goal in designing algorithms for this problem. It has been addressed by the recent algorithms of [31, 3], which are range-independent and satisfy approximate differential privacy, a relaxed version of differential privacy. However, neither of these algorithms is general; they explicitly fail unless additional special conditions on f hold. For example, the algorithm from [31] provides a range-independent result only when there is a single clear maximizer i* such that f(i*, D) is greater than the second highest value by some margin; the algorithm from [3] also has restrictive conditions that limit its applicability (see Section 2.2). Thus, a challenge is to develop a private maximization algorithm that is both range-independent and free of additional conditions; this is necessary to ensure that an algorithm is widely applicable and provides good utility when the universe size is large. In this work, we provide the first such general purpose range-independent private maximization algorithm. Our algorithm is based on two key insights. The first is that private maximization is easier when there is a small set of near maximizing items j ∈ U for which f(j, D) is close to the maximum value max_{i∈U} f(i, D). A plausible algorithm based on this insight is to first find a set of near maximizers, and then run the exponential mechanism on this set. However, finding this set directly in a differentially private manner is very challenging. Our second insight is that only the number ℓ of near maximizers needs to be found in a differentially private manner, a task that is considerably easier. Provided there is a margin between the maximum value and the (ℓ + 1)-th maximum value of f(i, D), running the exponential mechanism on the items with the top ℓ values of f(i, D) results in approximate differential privacy as well as good utility.
Our algorithm, which we call the large margin mechanism, automatically exploits large margins when they exist to simultaneously (i) satisfy approximate differential privacy (Theorem 2), and (ii) provide a utility guarantee that depends (logarithmically) only on the number of near maximizers, rather than on the universe size (Theorem 3). We complement our algorithm with a lower bound, showing that the utility of any approximate differentially private algorithm must deteriorate with the number of near maximizers (Theorem 1). A consequence of our lower bound is that range-independence cannot be achieved with pure differential privacy (Proposition 1), which justifies our relaxation to approximate differential privacy. Finally, we show the applicability of our algorithm to two problems from data mining and machine learning: frequent itemset mining and private PAC learning. For the first problem, an application of our method gives the first algorithm for frequent itemset mining that simultaneously guarantees approximate differential privacy and utility independent of the itemset universe size. For the second problem, our algorithm achieves tight sample complexity bounds for private PAC learning analogous to the shell bounds of [26] for non-private learning.

2 Background

This section reviews differential privacy and introduces the private maximization problem.

2.1 Definitions of Differential Privacy and Private Maximization

For the rest of the paper, we consider randomized algorithms A : X^n → Δ(S) that take as input datasets D ∈ X^n comprised of records from n individuals, and output values in a range S. Two datasets D, D′ ∈ X^n are said to be neighbors if they differ in a single individual's entry. A function φ : X^n → R is L-Lipschitz if |φ(D) − φ(D′)| ≤ L for all neighbors D, D′ ∈ X^n. The following definitions of (approximate) differential privacy are from [17] and [20].

Definition 1 (Differential Privacy).
A randomized algorithm A : X^n → Δ(S) is said to be (α, δ)-approximate differentially private if, for all neighbors D, D′ ∈ X^n and all subsets S of the range,

Pr(A(D) ∈ S) ≤ e^α Pr(A(D′) ∈ S) + δ.

The algorithm A is α-differentially private if it is (α, 0)-approximate differentially private. Smaller values of the privacy parameters α > 0 and δ ∈ [0, 1] imply stronger guarantees of privacy.

Definition 2 (Private Maximization). In the private maximization problem, a sensitive dataset D ⊆ X^n comprised of records from n individuals is given as input; there is also a universe U := {1, ..., K} of K items, and a function f : U × X^n → R such that f(i, ·) is (1/n)-Lipschitz for all i ∈ U. The goal is to return an item i ∈ U such that f(i, D) is as large as possible while satisfying (approximate) differential privacy.

Always returning the exact maximizer of f(·, D) is non-private, as changing a single individual's private values can potentially change the maximizer. Our goal is to design a randomized algorithm that outputs an approximate maximizer with high probability. (We loosely refer to the expected f(·, D) value of the chosen item as the utility of the algorithm.) Note that this problem is different from private release of the maximum value of f(·, D); a solution for the latter is easily obtained by adding Laplace noise with standard deviation O(1/(αn)) to max_{i∈U} f(i, D) [17]. Privately returning a nearly maximizing item itself is much more challenging. Private maximization is a core problem in the design of differentially private algorithms, and arises in numerous statistical and machine learning tasks. The examples of frequent itemset mining and PAC learning are discussed in Sections 4.1 and 4.2.

2.2 Previous Algorithms for Private Maximization

The standard algorithm for private maximization is the exponential mechanism [28]. Given a privacy parameter α > 0, the exponential mechanism randomly draws an item i ∈ U with probability p_i ∝ exp(nα f(i, D)/2); this guarantees α-differential privacy.
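The exponential mechanism's sampling step can be sketched in a few lines (a numerically stabilized version; the function name and interface are ours):

```python
import numpy as np

def exponential_mechanism(scores, n, alpha, rng):
    """Draw item i with probability proportional to exp(n * alpha * f(i, D) / 2),
    where scores[i] = f(i, D) for a (1/n)-Lipschitz objective f."""
    logits = n * alpha * np.asarray(scores, dtype=float) / 2.0
    logits -= logits.max()          # subtract the max before exponentiating, for stability
    p = np.exp(logits)
    p /= p.sum()
    return rng.choice(len(scores), p=p)
```

With a clear maximizer and nα large, the mechanism returns the top item with overwhelming probability, matching the e^{nα/2}/(K − 1 + e^{nα/2}) calculation discussed next.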
While the exponential mechanism is widely used because of its generality, a major limitation is its range-dependence, i.e., its utility diminishes with the universe size K. To be more precise, consider the following example where X := U = [K] and

f(i, D) := (1/n) |{j ∈ [n] : D_j ≥ i}| (1)

(where D_j is the j-th entry in the dataset D). When D = (1, 1, ..., 1), there is a clear maximizer i* = 1, which only changes when the entries of at least n/2 individuals in D change. It stands to reason that any algorithm should report i = 1 in this case with high probability. However, the exponential mechanism outputs i = 1 only with probability e^{nα/2}/(K − 1 + e^{nα/2}), which is small unless n = Ω(log(K)/α). This implies that the utility of the exponential mechanism deteriorates with K. Another general purpose algorithm is the max-of-Laplaces mechanism from [5]. Unfortunately, this algorithm is also range-dependent. Indeed, our first observation is that all α-differentially private algorithms that succeed on a wide class of private maximization problems share this same drawback.

Proposition 1 (Lower bound for differential privacy). Let A be any α-differentially private algorithm for private maximization, α ∈ (0, 1), and n ≥ 2. There exists a domain X, a function f : U × X^n → R such that f(i, ·) is (1/n)-Lipschitz for all i ∈ U, and a dataset D ∈ X^n such that

Pr( f(A(D), D) > max_{i∈U} f(i, D) − log((K − 1)/2)/(αn) ) < 1/2.

We remark that results similar to Proposition 1 have appeared in [23, 2, 10, 11, 7]; we simply reframe those results here in the context of private maximization. Proposition 1 implies that in order to remove range-dependence, we need to relax the privacy notion. We consider a relaxation of the privacy constraint to (α, δ)-approximate differential privacy with δ > 0. The approximate differentially private algorithm from [31] applies in the case where there is a single clear maximizer whose value is much larger than that of the rest.
This algorithm adds Laplace noise with standard deviation O(1/(αn)) to the difference between the largest and the second-largest values of f(·, D), and outputs the maximizer if this noisy difference is larger than O(log(1/δ)/(αn)); otherwise, it outputs Fail. Although this solution has high utility for the example in (1) with D = (1, 1, ..., 1), it fails even when there is a single additional item j ∈ U with f(j, D) close to the maximum value; for instance, D = (2, 2, ..., 2). [3] provides an approximate differentially private algorithm that applies when f satisfies a condition called ℓ-bounded growth. This condition entails the following: first, for any i ∈ U, adding a single individual to any dataset D can either keep f(i, D) constant or increase it by 1/n; and second, f(i, D) can only increase in this case for at most ℓ items i ∈ U. The utility of this algorithm depends only on log ℓ, rather than log K. In contrast, our algorithm does not require the first condition. Furthermore, to ensure that our algorithm only depends on log ℓ, it suffices that there be at most ℓ near maximizers, which is substantially less restrictive than the ℓ-bounded growth condition. As mentioned earlier, we avoid range-dependence with an algorithm that finds and optimizes over near maximizers of f(·, D). We next specify what we mean by near maximizers using a notion of margin.

3 The Large Margin Mechanism

We now present our new algorithm for private maximization, called the large margin mechanism, along with its privacy and utility guarantees.

3.1 Margins

We first introduce the notion of margin on which our algorithm is based. Given an instance of the private maximization problem and a positive integer ℓ ∈ N, let f^(ℓ)(D) denote the ℓ-th highest value of f(·, D). We adopt the convention that f^(K+1)(D) = −∞.

Condition 1 ((ℓ, γ)-margin condition).
For any ℓ∈N and γ > 0, we say a dataset D ∈X n satisfies the (ℓ, γ)-margin condition if f (ℓ+1)(D) < f (1)(D) −γ (i.e., there are at most ℓitems within γ of the top item according to f(·, D)).1 By convention, every dataset satisfies the (K, γ)-margin condition. Intuitively, a (ℓ, γ)-margin condition with a relatively large γ implies that there are ≤ℓnear maximizers, so the private maximization problem is easier when D satisfies an (ℓ, γ)-margin condition with small ℓ. How large should γ be for a given ℓ? The following lower bound suggests that in order to have n = O(log(ℓ)/α), we need γ to be roughly log(ℓ)/(αn). Theorem 1 (Lower bound for approximate differential privacy). Fix any α ∈(0, 1), ℓ> 1, and δ ∈ [0, (1 −exp(−α))/(2(ℓ−1))]; and assume n ≥2. Let A be any (α, δ)-approximate differentially private algorithm, and γ := min{1/2, log((ℓ−1)/2)/(nα)}. There exists a domain X, a function f : U ×X n →R such that f(i, ·) is (1/n)-Lipschitz for all i ∈U, and a dataset D ∈X n such that: 1. D satisfies the (ℓ, γ)-margin condition. 2. Pr f(A(D), D) > f (1)(D) −γ < 1 2. A consequence of Theorem 1 is that complete range-independence for all (1/n)-Lipschitz functions f is not possible, even with approximate differential privacy. For instance, if D satisfies an (ℓ, Ω(log(ℓ)/(αn)))-margin condition only when ℓ= Ω(K), then n must be Ω(log(K)/α) in order for an approximate differentially private algorithm to be useful. 3.2 Algorithm The lower bound in Theorem 1 suggests the following algorithm. First, privately determine a pair (ℓ, γ), with ℓis as small as possible and γ = Ω(log(ℓ)/(αn)), such that D satisfies the (ℓ, γ)-margin 1Our notion of margins here is different from the usual notion of margins from statistical learning that underlies linear prediction methods like support vector machines and boosting. In fact, our notion is more closely related to the shell decomposition bounds of [26], which we discuss in Section 4.2. 
condition. Then, run the exponential mechanism on the set U_ℓ ⊆ U of items with the ℓ highest f(·, D) values. This sounds rather natural and simple, but a knee-jerk reaction to this approach is that the set U_ℓ itself depends on the sensitive dataset D, and it may have high sensitivity in the sense that membership of many items in U_ℓ can change when a single individual's private value is changed. Thus differentially private computation of U_ℓ appears challenging. It turns out we do not need to guarantee the privacy of the set U_ℓ, but rather just of a valid (ℓ, γ) pair. This is essentially because when D satisfies the (ℓ, γ)-margin condition, the probability that the exponential mechanism picks an item i that occurs in U_ℓ when the sensitive dataset is D but not in U_ℓ when the sensitive dataset is its neighbor D′ is very small. Moreover, we can find such a valid (ℓ, γ) pair using a differentially private search procedure based on the sparse vector technique [22].
Algorithm 1: The large margin mechanism LMM(α, δ, D)
input: Privacy parameters α > 0 and δ ∈ (0, 1), database D ∈ X^n.
output: Item I ∈ U.
1: For each r = 1, 2, . . . , K, let t(r) := (6/n)(1 + ln(3r/δ)/α) = O(1/n + log(r/δ)/(nα)) and T(r) := (3/(nα)) ln(3/(2δ)) + (6/(nα)) ln(3/δ) + (12/(nα)) ln(3r(r + 1)/δ) + t(r) = O(1/n + log(r/δ)/(nα)).
2: Draw Z ∼ Lap(3/α).
3: Let m := f^(1)(D) + Z/n. {Estimate of max value.}
4: Draw G ∼ Lap(6/α) and Z_1, Z_2, . . . , Z_{K−1} iid ∼ Lap(12/α).
5: Let ℓ := 1. {Adaptively determine a value ℓ such that D satisfies the (ℓ, t(ℓ))-margin condition.}
6: while ℓ < K do
7:   if m − f^(ℓ+1)(D) > (Z_ℓ + G)/n + T(ℓ) then
8:     Break out of while-loop with current value of ℓ.
9:   else
10:    Let ℓ := ℓ + 1.
11:  end if
12: end while
13: Let U_ℓ be the set of ℓ items in U with highest f(i, D) value (ties broken arbitrarily).
14: Draw I ∼ p where p_i ∝ 1{i ∈ U_ℓ} exp(nαf(i, D)/6). {Exponential mechanism on top ℓ items.}
15: return I.
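A minimal Python sketch of Algorithm 1 might look as follows. All names are my own, the Z_ℓ are drawn lazily inside the loop rather than up front, and this is meant only to mirror the pseudocode, not to serve as a vetted privacy-preserving implementation:

```python
import numpy as np

def lmm(scores, alpha, delta, n, rng=None):
    """Sketch of the large margin mechanism (Algorithm 1).
    `scores` holds f(i, D) for each i in U; `n` is the dataset size."""
    if rng is None:
        rng = np.random.default_rng()
    K = len(scores)
    order = np.argsort(scores)[::-1]            # items by decreasing f(i, D)
    f = np.asarray(scores, dtype=float)[order]

    def t(r):
        return 6.0 / n * (1.0 + np.log(3 * r / delta) / alpha)

    def T(r):
        return (3 / (n * alpha) * np.log(3 / (2 * delta))
                + 6 / (n * alpha) * np.log(3 / delta)
                + 12 / (n * alpha) * np.log(3 * r * (r + 1) / delta)
                + t(r))

    Z = rng.laplace(scale=3.0 / alpha)
    m = f[0] + Z / n                            # noisy estimate of the max
    G = rng.laplace(scale=6.0 / alpha)

    ell = 1                                     # sparse-vector search for ell
    while ell < K:
        Zl = rng.laplace(scale=12.0 / alpha)
        if m - f[ell] > (Zl + G) / n + T(ell):
            break
        ell += 1

    # exponential mechanism restricted to the top-ell items
    logits = n * alpha * f[:ell] / 6.0
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return int(order[rng.choice(ell, p=p)])
```

With a clearly dominant top score and a large n·α, the sketch returns the true maximizer with overwhelming probability while only ever exponentiating over the ℓ near maximizers.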
Combining these ideas gives a general (and adaptive) algorithm whose loss of utility due to privacy is only O(log(ℓ/δ)/(αn)) when the dataset satisfies an (ℓ, O(log(ℓ/δ)/(αn)))-margin condition. We call this general algorithm the large margin mechanism (Algorithm 1), or LMM for short. 3.3 Privacy and Utility Guarantees We first show that LMM satisfies approximate differential privacy. Theorem 2 (Privacy guarantee). LMM(α, δ, ·) satisfies (α, δ)-approximate differential privacy. The proof of Theorem 2 is in Appendix A.1. The following theorem, proved in Appendix A.2, provides a guarantee on the utility of LMM. Theorem 3 (Utility guarantee). Pick any η ∈ (0, 1). Suppose D ∈ X^n satisfies the (ℓ*, γ*)-margin condition with γ* = (21/(nα)) ln(3/η) + T(ℓ*). Then with probability at least 1 − η, I := LMM(α, δ, D) satisfies f(I, D) ≥ f^(1)(D) − 6 ln(2ℓ*/η)/(nα). (Above, T(ℓ*) is as defined in Algorithm 1.) Remark 1. Fix some α, δ ∈ (0, 1). Theorem 3 states that if the dataset D satisfies the (ℓ*, γ*)-margin condition, for some positive integer ℓ* and γ* = C log(ℓ*/δ)/(nα) for some universal constant C > 0, then the value f(I, D) of the item I returned by LMM is within O(log(ℓ*)/(nα)) of the maximum, with high probability. There is no explicit dependence on the cardinality K of the universe U. 4 Illustrative Applications We now describe applications of LMM to problems from data mining and machine learning. 4.1 Private Frequent Itemset Mining Frequent Itemset Mining (FIM) is the following popular data mining problem: given the purchase lists of users (say, for an online grocery store), the goal is to find the sets of items that are purchased together most often. The work of [5] provides the first differentially private algorithms for FIM. However, as these algorithms rely on the exponential mechanism and the max-of-Laplaces mechanism, their utilities degrade with the total number of possible itemsets.
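Both LMM's final step (line 14) and the baselines discussed in this section rest on the exponential mechanism. A generic sketch for a utility f with sensitivity 1/n (the function and parameter names here are illustrative, not from the paper) might be:

```python
import numpy as np

def exponential_mechanism(scores, n, alpha, rng=None):
    """Select an index with probability proportional to exp(n*alpha*f/2),
    assuming each f(i, .) has sensitivity 1/n in the dataset."""
    if rng is None:
        rng = np.random.default_rng()
    logits = n * alpha * np.asarray(scores, dtype=float) / 2.0
    p = np.exp(logits - logits.max())   # shift for numerical stability
    p /= p.sum()
    return int(rng.choice(len(scores), p=p))
```

The utility cost of this mechanism scales with log K, the log of the universe size; the point of LMM is to replace that with log ℓ.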
Subsequent algorithms exploit other properties of itemsets or avoid directly finding the most frequent itemset [34, 27, 15, 8]. Let I be the set of items that can be purchased, and let B be the maximum length of a user's purchase list. Let U ⊆ 2^I be the family of itemsets of interest. For simplicity, we let U := I^r (i.e., all itemsets of size r) and consider the problem of picking the itemset with the (approximately) highest frequency. This is a private maximization problem where D is the users' lists of purchased items, and f(i, D) is the fraction of users who purchase an itemset i ∈ U. Let f_max be the highest frequency of an itemset in D. Let L be the total number of itemsets with non-zero frequency, so L ≤ n·(B choose r), which is ≪ |I|^r whenever B ≪ |I|. Applying LMM gives the following guarantee. Corollary 1. Suppose we use LMM(α, δ, ·) on the FIM problem above. Then there exists a constant C > 0 such that the following holds. If f_max ≥ C · log(L/δ)/(nα), then with probability ≥ 1 − δ, the frequency of the itemset I_LMM output by LMM is f(I_LMM, D) ≥ f_max − O(log(L/δ)/(nα)). In contrast, the itemset I_EM returned by the exponential mechanism is only guaranteed to satisfy f(I_EM, D) ≥ f_max − O(r log(|I|/δ)/(nα)), which is significantly worse than Corollary 1 whenever L ≪ |I|^r (as is typically the case). Second, to ensure differential privacy by running the exponential mechanism, one needs a priori knowledge of the set U (and thus the universe of items I) independently of the observed data; otherwise the process will not be end-to-end differentially private. In contrast, our algorithm does not need to know I in order to provide end-to-end differential privacy. Finally, unlike [31], our algorithm does not require a gap between the top two itemset frequencies. 4.2 Private PAC Learning We now consider private PAC learning with a finite hypothesis class H of bounded VC dimension d [25]. Here, the dataset D consists of n labeled training examples drawn iid from a fixed distribution.
The error err(h) of a hypothesis h ∈ H is the probability that it misclassifies a random example drawn from the same distribution. The goal is to return a hypothesis h ∈ H with error as low as possible. A standard procedure that has been well-studied in the literature simply returns the minimizer ĥ ∈ H of the empirical error êrr(h, D) computed on the training data D, but this does not guarantee (approximate) differential privacy. The work of [25] instead uses the exponential mechanism to select a hypothesis h_EM ∈ H. With probability ≥ 1 − δ0, err(h_EM) ≤ min_{h∈H} err(h) + O(√(d log(n/δ0)/n) + (log |H| + log(1/δ0))/(αn)). (2) The dependence on log |H| is improved to d log |Σ| by [7] when the data entries come from a finite set Σ. The subsequent work of [4] introduces the notion of representation dimension, and shows how it relates to differentially private learning in the discrete and finite case, and [3] provides improved convergence bounds with approximate differential privacy that exploit the structure of some specific hypothesis classes. For the case of infinite hypothesis classes and continuous data distributions, [10] shows that distribution-free private PAC learning is not generally possible, but distribution-dependent learning can be achieved under certain conditions. We provide a sample complexity bound of a rather different character compared to previous work. Our bound only relies on uniform convergence properties of H, and can be significantly tighter than the bounds from [25] when the number of hypotheses with error close to min_{h∈H} err(h) is small. Indeed, the bounds are a private analogue of the shell bounds of [26], which characterize the structure of the hypothesis class as a function of the properties of a decomposition based on hypotheses' error rates. In many situations, these bounds are significantly tighter than those that do not involve the error distributions.
Following [26], we divide the hypothesis class H into R = O(√(n/(d log n))) shells; the t-th shell H(t) is defined by H(t) := {h ∈ H : err(h) ≤ min_{h′∈H} err(h′) + C0·t·√(d log(n/δ0)/n)}. Above, C0 > 0 is the constant from uniform convergence bounds; i.e., C0 is the smallest c > 0 such that for all h ∈ H, with probability ≥ 1 − δ0, we have |êrr(h, D) − err(h)| ≤ c·√(d log(n/δ0)/n). Observe that H(t + 1) ⊆ H(t); and moreover, with probability ≥ 1 − δ0, all h ∈ H(t) have êrr(h, D) ≤ min_{h′∈H} err(h′) + C0·(t + 1)·√(d log(n/δ0)/n). Define t*(n) as the smallest integer t ∈ N such that (log(|H(t + 1)|) + log(1/δ))/t ≤ C0·α·√(dn log n)/C, where C > 0 is the constant from Remark 1. Then, with probability ≥ 1 − δ0, the dataset D with f = 1 − êrr satisfies the (ℓ, γ)-margin condition, with ℓ = |H(t*(n) + 1)| and γ = C log(|H(t*(n) + 1)|/δ)/(αn). Therefore, we have the following guarantee for applying LMM to this problem. Corollary 2. Suppose we use LMM(α, δ, ·) on the learning problem above (with U = H and f = 1 − êrr). Then, with probability ≥ 1 − δ0 − δ, the hypothesis h_LMM returned by LMM satisfies err(h_LMM) ≤ min_{h∈H} err(h) + O(√(d log(n/δ0)/n) + log(|H(t*(n) + 1)|/δ)/(αn)). The dependence on log |H| from (2) is replaced here by log(|H(t*(n) + 1)|/δ), which can be vastly smaller, as discussed in [26]. 5 Additional Related Work There has been a large amount of work on differential privacy for a wide range of statistical and machine learning tasks over the last decade [6, 30, 13, 21, 33, 24, 1]; for overviews, see [18] and [29]. In particular, algorithms for the private maximization problem (and variants) have been used as subroutines in many applications; examples include PAC learning [25], principal component analysis [14], performance validation [12], and multiple hypothesis testing [32]. A separation between pure and approximate differential privacy has been shown in several previous works [19, 31, 3].
The first approximate differentially private algorithm that achieves a separation is the Propose-Test-Release (PTR) framework [19]. Given a function, PTR determines an upper bound on its local sensitivity at the input dataset through a search procedure; noise proportional to this upper bound is then added to the actual function value. We note that the PTR framework does not directly apply to our setting, as the sensitivity is not generally defined for a discrete universe. In the context of private PAC learning, the work of [3] gives the first separation between pure and approximate differential privacy. In addition to using the algorithm from [31], they devise two additional algorithmic techniques: a concave maximization procedure for learning intervals, and an algorithm for the private maximization problem under the ℓ-bounded growth condition discussed in Section 2.2. The first algorithm is specific to their problem and does not appear to apply to general private maximization problems. The second algorithm has a sample complexity bound of n = O(log(ℓ)/α) when the function f satisfies the ℓ-bounded growth condition. Lower bounds for approximate differential privacy have been shown by [7, 16, 11, 9], and the proof of our Theorem 1 borrows some techniques from [11]. 6 Conclusion and Future Work In this paper, we have presented the first general and range-independent algorithm for approximate differentially private maximization. The algorithm automatically adapts to the available large margin properties of the sensitive dataset, and reverts to worst-case guarantees when such properties are lacking. We have illustrated the applicability of the algorithm in two fundamental problems from data mining and machine learning; in future work, we plan to study other applications where range-independence is a substantial boon. Acknowledgments. We thank an anonymous reviewer for suggesting the simpler variant of LMM based on the exponential mechanism.
(The original version of LMM used a max of truncated exponentials mechanism, which gives the same guarantees up to constant factors.) This work was supported in part by the NIH under U54 HL108460 and the NSF under IIS 1253942. References [1] Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization, revisited. arXiv:1405.7085, 2014. [2] Amos Beimel, Shiva Prasad Kasiviswanathan, and Kobbi Nissim. Bounds on the sample complexity for private learning and private data release. In Theory of Cryptography, pages 437– 454. Springer, 2010. [3] Amos Beimel, Kobbi Nissim, and Uri Stemmer. Private learning and sanitization: Pure vs. approximate differential privacy. In RANDOM, 2013. [4] Amos Beimel, Kobbi Nissim, and Uri Stemmer. Characterizing the sample complexity of private learners. In ITCS, pages 97–110, 2013. [5] Raghav Bhaskar, Srivatsan Laxman, Adam Smith, and Abhradeep Thakurta. Discovering frequent patterns in sensitive data. In KDD, 2010. [6] A. Blum, C. Dwork, F. McSherry, and K. Nissim. Practical privacy: the SuLQ framework. In PODS, 2005. [7] Avrim Blum, Katrina Ligett, and Aaron Roth. A learning theory approach to noninteractive database privacy. Journal of the ACM, 60(2):12, 2013. [8] Luca Bonomi and Li Xiong. Mining frequent patterns with differential privacy. Proceedings of the VLDB Endowment, 6(12):1422–1427, 2013. [9] Mark Bun, Jonathan Ullman, and Salil Vadhan. Fingerprinting codes and the price of approximate differential privacy. In STOC, 2014. [10] Kamalika Chaudhuri and Daniel Hsu. Sample complexity bounds for differentially private learning. In COLT, 2011. [11] Kamalika Chaudhuri and Daniel Hsu. Convergence rates for differentially private statistical estimation. In ICML, 2012. [12] Kamalika Chaudhuri and Staal A Vinterbo. A stability-based validation procedure for differentially private machine learning. In Advances in Neural Information Processing Systems, pages 2652–2660, 2013. 
[13] Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12:1069–1109, 2011. 8 [14] Kamalika Chaudhuri, Anand D. Sarwate, and Kaushik Sinha. Near-optimal differentially private principal components. In Advances in Neural Information Processing Systems, pages 998–1006, 2012. [15] Rui Chen, Noman Mohammed, Benjamin CM Fung, Bipin C Desai, and Li Xiong. Publishing set-valued data via differential privacy. In VLDB, 2011. [16] Anindya De. Lower bounds in differential privacy. In Ronald Cramer, editor, Theory of Cryptography, volume 7194 of Lecture Notes in Computer Science, pages 321–338. SpringerVerlag, 2012. [17] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography, 2006. [18] Cynthia Dwork. Differential privacy: A survey of results. In Theory and Applications of Models of Computation, pages 1–19. Springer, 2008. [19] Cynthia Dwork and Jing Lei. Differential privacy and robust statistics. In Proceedings of the 41st annual ACM symposium on Theory of computing, pages 371–380. ACM, 2009. [20] Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In EURO-CRYPT. 2006. [21] A. Friedman and A. Schuster. Data mining with differential privacy. In KDD, 2010. [22] Moritz Hardt and Guy N Rothblum. A multiplicative weights mechanism for privacypreserving data analysis. In FOCS, 2010. [23] Moritz Hardt and Kunal Talwar. On the geometry of differential privacy. In Proceedings of the 42nd ACM symposium on Theory of computing, pages 705–714. ACM, 2010. [24] Prateek Jain, Pravesh Kothari, and Abhradeep Thakurta. Differentially private online learning. In COLT, 2012. [25] Shiva Prasad Kasiviswanathan, Homin K Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. What can we learn privately? 
SIAM Journal on Computing, 40(3):793–826, 2011. [26] John Langford and David McAllester. Computable shell decomposition bounds. J. Mach. Learn. Res., 5:529–547, 2004. [27] Ninghui Li, Wahbeh Qardaji, Dong Su, and Jianneng Cao. Privbasis: frequent itemset mining with differential privacy. In VLDB, 2012. [28] Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In FOCS, 2007. [29] A.D. Sarwate and K. Chaudhuri. Signal processing and machine learning with differential privacy: Algorithms and challenges for continuous data. Signal Processing Magazine, IEEE, 30(5):86–94, Sept 2013. ISSN 1053-5888. doi: 10.1109/MSP.2013.2259911. [30] Adam Smith. Privacy-preserving statistical estimation with optimal convergence rates. In STOC, 2011. [31] Adam Smith and Abhradeep Thakurta. Differentially private feature selection via stability arguments, and the robustness of the lasso. In COLT, 2013. [32] Caroline Uhler, Aleksandra B. Slavkovic, and Stephen E. Fienberg. Privacy-preserving data sharing for genome-wide association studies. arXiv:1205.0739, 2012. [33] Larry Wasserman and Shuheng Zhou. A statistical framework for differential privacy. Journal of the American Statistical Association, 105(489):375–389, 2010. [34] Chen Zeng, Jeffrey F Naughton, and Jin-Yi Cai. On differentially private frequent itemset mining. In VLDB, 2012.
|
2014
|
133
|
5,218
|
Causal Inference through a Witness Protection Program Ricardo Silva Department of Statistical Science and CSML University College London ricardo@stats.ucl.ac.uk Robin Evans Department of Statistics University of Oxford evans@stats.ox.ac.uk Abstract One of the most fundamental problems in causal inference is the estimation of a causal effect when variables are confounded. This is difficult in an observational study because one has no direct evidence that all confounders have been adjusted for. We introduce a novel approach for estimating causal effects that exploits observational conditional independencies to suggest "weak" paths in an unknown causal graph. The widely used faithfulness condition of Spirtes et al. is relaxed to allow for varying degrees of "path cancellations" that will imply conditional independencies but do not rule out the existence of confounding causal paths. The outcome is a posterior distribution over bounds on the average causal effect via a linear programming approach and Bayesian inference. We claim this approach should be used in regular practice to complement other default tools in observational studies. 1 Contribution We provide a new methodology to bound the average causal effect (ACE) of a variable X on a variable Y. For binary variables, the ACE is defined as E[Y | do(X = 1)] − E[Y | do(X = 0)] = P(Y = 1 | do(X = 1)) − P(Y = 1 | do(X = 0)), (1) where do(·) is the operator of Pearl [14], denoting distributions where a set of variables has been intervened upon by an external agent. In the interest of space, we assume the reader is familiar with the concept of causal graphs, the basics of the do operator, and the basics of causal discovery algorithms such as the PC algorithm of Spirtes et al. [22]. We provide a short summary for context in Section 2. The ACE is in general not identifiable from observational data.
We obtain upper and lower bounds on the ACE by exploiting a set of (binary) covariates, which we also assume are not effects of X or Y (justified by temporal ordering or other background assumptions). Such covariate sets are often found in real-world problems, and form the basis of most observational studies done in practice [21]. However, it is not obvious how to obtain the ACE as a function of the covariates. Our contribution modifies the results of Entner et al. [6], who exploit conditional independence constraints to obtain point estimates of the ACE, but give point estimates relying on assumptions that might be unstable in practice. Our modification provides a different interpretation of their search procedure, which we use to generate candidate instrumental variables [11]. The linear programming approach of Dawid [5] and Ramsahai [16] is then modified to generate bounds on the ACE by introducing constraints on some causal paths, motivated as relaxations of [6]. The new setup can be computationally expensive, so we introduce further relaxations to the linear program to generate novel symbolic bounds, and a fast algorithm that sidesteps the full linear programming optimization with some simple, message-passing-like steps. Figure 1: (a) A generic causal graph where X and Y are confounded by some U. (b) The same system in (a) where X is intervened upon by an external agent. (c) A system where W and Y are independent given X. (d) A system where it is possible to use faithfulness to discover that U is sufficient to block all back-door paths between X and Y. (e) Here, U itself is not sufficient. Section 2 introduces the background of the problem and Section 3 our methodology. Section 4 discusses an analytical approximation of the main results, and a way by which this provides scaling-up possibilities for the approach. Section 5 contains experiments with synthetic and real data.
2 Background: Instrumental Variables, Witnesses and Admissible Sets Assuming X is a potential cause of Y, but not the opposite, a cartoon of the causal system containing X and Y is shown in Figure 1(a). U represents the universe of common causes of X and Y. In control and policy-making problems, we would like to know what happens to the system when the distribution of X is overridden by some external agent (e.g., a doctor, a robot or an economist). The resulting modified system is depicted in Figure 1(b), and represents the family of distributions indexed by do(X = x): the graph in (a) has undergone a "surgery" that wipes out edges, as originally discussed by [22] in the context of graphical models. Notice that if U is observed in the dataset, then we can obtain the distribution P(Y = y | do(X = x)) by simply calculating Σ_u P(Y = y | X = x, U = u) P(U = u) [22]. This was popularized by [14] as the back-door adjustment. In general P(Y = y | do(X = x)) can be vastly different from P(Y = y | X = x). The ACE is simple to estimate in a randomized trial: this follows from estimating the conditional distribution of Y given X under data generated as in Figure 1(b). In contrast, in an observational study [21] we obtain data generated by the system in Figure 1(a). If one believes all relevant confounders U have been recorded in the data then back-door adjustment can be used, though such completeness is uncommon. By postulating knowledge of the causal graph relating components of U, one can infer whether a measured subset of the causes of X and Y is enough [14, 23, 15]. Without knowledge of the causal graph, assumptions such as faithfulness [22] are used to infer it.
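For concreteness, when U is observed and discrete the back-door adjustment described above reduces to a one-line computation of ACE = Σ_u [P(Y = 1 | X = 1, U = u) − P(Y = 1 | X = 0, U = u)] P(U = u); the sketch below uses assumed array conventions of my own:

```python
import numpy as np

def backdoor_ace(p_y1_given_xu, p_u):
    """Back-door adjustment for binary X with a discrete admissible set U.
    p_y1_given_xu has shape (2, |U|): row x holds P(Y=1 | X=x, U=u)."""
    p = np.asarray(p_y1_given_xu, dtype=float)
    p_u = np.asarray(p_u, dtype=float)
    return float((p[1] - p[0]) @ p_u)   # average the contrasts over P(U)
```

In contrast, the naive contrast P(Y = 1 | X = 1) − P(Y = 1 | X = 0) averages over P(U | X = x) instead of P(U), which is where confounding bias enters.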
For instance, observing the independence W ⊥⊥ Y | X, and assuming faithfulness and the causal order, we can infer the causal graph Figure 1(c); in all the other graphs this conditional independence is not implied. We deduce that no unmeasured confounders between X and Y exist. This simple procedure for identifying chains W → X → Y is useful in exploratory data analysis [4], where a large number of possible causal relations X → Y are unquantified but can be screened using observational data before experiments are performed. The idea of using faithfulness is to be able to sometimes identify such quantities. Entner et al. [6] generalize the discovery of chain models to situations where a non-empty set of covariates is necessary to block all back-doors. Suppose W is a set of covariates which are known not to be effects of either X or Y, and we want to find an admissible set contained in W: a set of observed variables which we can use for back-door adjustment to get P(Y = y | do(X = x)). Entner's "Rule 1" states the following: Rule 1: If there exists a variable W ∈ W and a set Z ⊆ W\{W} such that: (i) W ̸⊥⊥ Y | Z; (ii) W ⊥⊥ Y | Z ∪ {X}; then infer that Z is an admissible set. A point estimate of the ACE can then be found using Z. Given that (W, Z) satisfies¹ Rule 1, we call W a witness for the admissible set Z. The model in Figure 1(c) can be identified with Rule 1, where W is the witness and Z = ∅. In this case, a so-called Naïve Estimator² P(Y = 1 | X = 1) − P(Y = 1 | X = 0) will provide the correct ACE. If U is observable in Figure 1(d), then it can be identified as an admissible set for witness W. Notice that in Figure 1(a), taking U as a scalar, it is not possible to find a witness since there are no remaining variables. Also, if in Figure 1(e) our covariate set W is {W, U}, then no witness can be found since U′ cannot be blocked.
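Rule 1 lends itself to a direct, if exhaustive, search over witness/admissible-set pairs. The sketch below takes a user-supplied conditional-independence oracle (in practice a statistical test), so every name in it is illustrative rather than the authors' implementation:

```python
from itertools import combinations

def rule1_pairs(covariates, x, y, indep):
    """Exhaustive search for Rule 1. `indep(a, b, cond)` is an assumed
    conditional-independence oracle; returns (witness, admissible set)
    pairs where (i) W not indep Y | Z and (ii) W indep Y | Z + {X}."""
    found = []
    for w in covariates:
        rest = [v for v in covariates if v != w]
        for k in range(len(rest) + 1):
            for z in combinations(rest, k):
                if (not indep(w, y, list(z))
                        and indep(w, y, list(z) + [x])):
                    found.append((w, set(z)))
    return found
```

For a chain W → X → Y with a single covariate this recovers W as a witness with Z = ∅, matching the discussion of Figure 1(c).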
Hence, it is possible for a procedure based on Rule 1 to answer "I don't know whether an admissible set exists" even when a back-door adjustment would be possible if one knew the causal graph. However, using the faithfulness assumption alone one cannot do better: Rule 1 is complete for non-zero effects without more information [6]. Despite its appeal, the faithfulness assumption is not without difficulties. Even if unfaithful distributions can be ruled out as pathological under seemingly reasonable conditions [13], distributions which lie close to (but not on) a simpler model may in practice be indistinguishable from distributions within that simpler model at finite sample sizes. To appreciate these complications, consider the structure in Figure 1(d) with U unobservable. Here W is randomized but X is not, and we would like to know the ACE of X on Y.³ W is sometimes known as an instrumental variable (IV), and we call Figure 1(d) the standard IV structure; if this structure is known, optimal bounds L_IV ≤ ACE ≤ U_IV can be obtained without further assumptions, using only observational data over the binary variables W, X and Y [1]. There exist distributions faithful to the IV structure but which at finite sample sizes may appear to satisfy the Markov property for the structure W → X → Y; in practice this can occur at any finite sample size [20]. The true average causal effect may lie anywhere in the interval [L_IV, U_IV] (which can be rather wide), and may differ considerably from the naïve estimate appropriate for the simpler structure. While we emphasize that this is a 'worst-case scenario' analysis and by itself should not rule out faithfulness as a useful assumption, it is desirable to provide a method that gives greater control over violations of faithfulness.
3 Methodology: the Witness Protection Program The core of our idea is (i) to invert the usage of Entner's Rule 1, so that pairs (W, Z) should provide an instrumental variable bounding method instead of a back-door adjustment; (ii) express violations of faithfulness as bounded violations of local independence; (iii) find bounds on the ACE using a linear programming formulation. Let (W, Z) be any pair found by a search procedure that decides when Rule 1 holds. W will play the role of an instrumental variable, instead of being discarded. A standard IV bounding procedure such as [1] can be used conditional on each individual value z of Z, then averaged over P(Z). The lack of an edge W → Y given Z can be justified by faithfulness (as W ⊥⊥ Y | {X, Z}). For the same reason, there might be no (conditional) dependence between W and a possible unmeasured common parent of X and Y. However, assuming faithfulness itself is not interesting, as a back-door adjustment could be directly obtained. Allowing unconstrained dependencies induced by edges W → Y and (W, U) (any direction) is also a non-starter, as all bounds will be vacuous [16]. Consider instead the (partial) parameterization in Table 1 of the joint distribution of {W, X, Y, U}, where U is latent and not necessarily a scalar. For simplicity of presentation, assume we are conditioning everywhere on a particular value z of Z, which we suppress from our notation as this will not be crucial to developments in this section. Under this notation, the ACE is given by η_11 P(W = 1) + η_10 P(W = 0) − η_01 P(W = 1) − η_00 P(W = 0). (2) ¹The work in [6] aims also at identifying zero effects with a "Rule 2". For simplicity we assume that the effect of interest was already identified as non-zero. ²Sometimes we use the word "estimator" to mean a functional of the probability distribution instead of a statistical estimator that is a function of samples of this distribution.
Context should make it clear when we refer to an actual statistic or a functional. ³A classical example is in non-compliance: suppose W is the assignment of a patient to either drug or placebo, X is whether the patient actually took the medicine or not, and Y is a measure of health status. The doctor controls W but not X. This problem is discussed by [14] and [5].
Table 1: A partial parameterization of a causal DAG model over some {U, W, X, Y}:
ζ*_{yx.w} ≡ P(Y = y, X = x | W = w, U); ζ_{yx.w} ≡ Σ_U P(Y = y, X = x | W = w, U) P(U | W = w) = P(Y = y, X = x | W = w);
η*_{xw} ≡ P(Y = 1 | X = x, W = w, U); η_{xw} ≡ Σ_U P(Y = 1 | X = x, W = w, U) P(U | W = w) = P(Y = 1 | do(X = x), W = w);
δ*_w ≡ P(X = 1 | W = w, U); δ_w ≡ Σ_U P(X = 1 | W = w, U) P(U | W = w) = P(X = 1 | W = w).
Notice that such parameters cannot be functionally independent, and this is precisely what we will exploit. We now introduce the following assumptions: |η*_{x1} − η*_{x0}| ≤ ϵ_w (3); |η*_{xw} − P(Y = 1 | X = x, W = w)| ≤ ϵ_y (4); |δ*_w − P(X = 1 | W = w)| ≤ ϵ_x (5); β·P(U) ≤ P(U | W = w) ≤ β̄·P(U) (6). Setting ϵ_w = 0, β = β̄ = 1 recovers the standard IV structure. Further assuming ϵ_y = ϵ_x = 0 recovers the chain structure W → X → Y. Deviation from these values corresponds to a violation of faithfulness, as the premises of Rule 1 can only be satisfied by enforcing functional relationships among the conditional probability tables of each vertex. Using this parameterization in the case ϵ_y = ϵ_x = 1, β = β̄ = 1, Ramsahai [16], extending [5], used the following linear programming formulation to obtain bounds on the ACE (for now, assume that ζ_{yx.w} and P(W = w) are known constants): 1. There is a 4-dimensional polytope where the parameters {η*_{xw}} can take values: for ϵ_w = ϵ_y = 1, this is the unit hypercube [0, 1]^4. Find the extreme points of this polytope (up to 12 points for the case where ϵ_w > 0). Do the same for {δ*_w}. 2.
Find the extreme points of the joint space {ζ*_{yx.w}} by mapping them from the points in {δ*_w} × {η*_{xw}}, since ζ*_{yx.w} = (δ*_w)^x (1 − δ*_w)^(1−x) η*_{xw}. 3. Using the extreme points of the 12-dimensional joint space {ζ*_{yx.w}} × {η*_{xw}}, find the dual polytope of this space in terms of linear inequalities. Points in this polytope are convex combinations of {ζ*_{yx.w}} × {η*_{xw}}, shown by [5] to correspond to the marginalization over some arbitrary P(U). This results in constraints over {ζ_{yx.w}} × {η_{xw}}. 4. Maximize/minimize (2) with respect to {η_{xw}} subject to the constraints found in Step 3 to obtain upper/lower bounds on the ACE. Allowing for the case where ϵ_x < 1 or ϵ_y < 1 is just a matter of changing the first step, where box constraints are set on each individual parameter as a function of the known P(Y = y, X = x | W = w), prior to the mapping in Step 2. The resulting constraints are now implicitly non-linear in P(Y = y, X = x | W = w), but at this stage this does not matter as they are treated as constants. To allow for the case β < 1 < β̄, use exactly the same procedure, but substitute every occurrence of ζ_{yx.w} in the constraints by κ_{yx.w} ≡ Σ_U ζ*_{yx.w} P(U); notice the difference between κ_{yx.w} and ζ_{yx.w}. Likewise, substitute every occurrence of η_{xw} in the constraints by ω_{xw} ≡ Σ_U η*_{xw} P(U). Instead of plugging in constants for the values of κ_{yx.w} and turning the crank of a linear programming solver, we first treat {κ_{yx.w}} (and {ω_{xw}}) as unknowns, linking them to observables and η_{xw} by the constraints ζ_{yx.w}/β̄ ≤ κ_{yx.w} ≤ ζ_{yx.w}/β, Σ_{yx} κ_{yx.w} = 1 and η_{xw}/β̄ ≤ ω_{xw} ≤ η_{xw}/β. Finally, the method can be easily implemented using a package such as Polymake (http://www.polymake.org) or SCDD for R. More details are given in the Supplemental Material. In this paper, we will not discuss in detail how to choose the free parameters of the relaxation.
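For the standard IV special case (ϵ_w = 0, β = β̄ = 1), the bounds of [1] can also be obtained with an off-the-shelf LP solver via the response-type parameterization, which is an equivalent route to the vertex-enumeration procedure described here. The sketch below is illustrative and mine, not the authors' Polymake-based implementation:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

def iv_ace_bounds(p_yx_given_w):
    """LP bounds on the ACE for the standard IV structure with binary
    W, X, Y. p_yx_given_w[w][y][x] holds P(Y=y, X=x | W=w)."""
    # joint response types: c maps w -> x, r maps x -> y (16 in total)
    types = list(product(product((0, 1), repeat=2), product((0, 1), repeat=2)))
    A, b = [], []
    for w, y, x in product((0, 1), repeat=3):
        # mass of types consistent with observing (x, y) under W = w
        A.append([1.0 if (c[w] == x and r[x] == y) else 0.0 for c, r in types])
        b.append(p_yx_given_w[w][y][x])
    ace = np.array([r[1] - r[0] for _, r in types], dtype=float)
    lo = linprog(ace, A_eq=A, b_eq=b, bounds=[(0, 1)] * 16)
    hi = linprog(-ace, A_eq=A, b_eq=b, bounds=[(0, 1)] * 16)
    return lo.fun, -hi.fun
```

The WPP relaxation generalizes this picture by inflating the feasible polytope according to ϵ_w, ϵ_y, ϵ_x, β, β̄, which is why the paper resorts to explicit vertex enumeration and dualization.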
Any choice of ϵ_w ≥ 0, ϵ_y ≥ 0, ϵ_x ≥ 0, 0 ≤ β ≤ 1 ≤ β̄ is guaranteed to provide bounds that are at least as conservative as the back-door adjusted point estimator of [6], which is always covered by the bounds. Background knowledge can be used here once a witness and admissible set have been suggested to the user. In Section 5 we experiment with a few choices of default parameters. To keep focus, in what follows we will discuss only computational aspects. We develop a framework for choosing relaxation parameters in the Supplemental Material, and expect to extend it in follow-up publications. As the approach provides the witness a degree of protection against faithfulness violations, using a linear program, we call this framework the Witness Protection Program (WPP).
Algorithm 1: The outline of the Witness Protection Program algorithm.
input: Binary data matrix D; set of relaxation parameters θ; covariate index set W; cause-effect indices X and Y
output: A list of pairs (witness, admissible set) contained in W
L ← ∅;
for each W ∈ W do
  for every admissible set Z ⊆ W\{W} identified by W and θ given D do
    B ← posterior over upper/lower bounds on the ACE as given by (W, Z, X, Y, D, θ);
    if there is no evidence in B to falsify the (W, Z, θ) model then L ← L ∪ {B};
  end
end
return L
3.1 Bayesian Learning The previous section treated ζ_{yx.w} and P(W = w) as known. A common practice is to replace them by plug-in estimators (and in the case of a non-empty admissible set Z, an estimate of P(Z) is also necessary). Such models can also be falsified, as the constraints generated are typically only supported by a strict subset of the probability simplex. In principle, one could fit parameters without constraints, and test the model by a direct check of satisfiability of the inequalities using the plug-in values. However, this does not take into account the uncertainty in the estimation. For the standard IV model, [17] discuss a proper way of testing such models in a frequentist sense.
Our models can be considerably more complicated. Recall that the constraints will depend on the extreme points of the {ζ⋆_{yx.w}} parameters. As implied by (4) and (5), the extreme points will be functions of ζ_{yx.w}. Writing the constraints fully in terms of the observed distribution will reveal non-linear relationships. We approach the problem in a Bayesian way. We assume first that the dimensionality of Z is modest (say, 10 or less), as this is the case in most applications of faithfulness to causal discovery. We parameterize P(Y, X, W | Z) as a full 2 × 2 × 2 contingency table (footnote 4). Given that the dimensionality of the problem is modest, we assign to each three-variate distribution P(Y, X, W | Z = z) an independent Dirichlet prior for every possible assignment of Z, constrained by the inequalities implied by the corresponding polytopes. The posterior is then an 8-dimensional constrained Dirichlet distribution, from which we obtain posterior samples by rejection sampling, proposing from the unconstrained Dirichlet. A Dirichlet prior can also be assigned to P(Z). Using a sample from the posterior of P(Z) and a sample (for each possible value z) from the posterior of P(Y, X, W | Z = z), we obtain sampled upper and lower bounds for the ACE. The full algorithm is shown in Algorithm 1. The search procedure is left unspecified, as different existing approaches can be plugged into this step. See [6] for a discussion. In Section 5 we deal with small-dimensional problems only, using the brute-force approach of performing an exhaustive search for Z. In practice, brute force can still be valuable by using a method such as discrete PCA [3] to reduce W\{W} to a small set of binary variables. To decide whether the premises in Rule 1 hold, we merely perform Bayesian model selection with the BDeu score [2] between the full graph {W → X, W → Y, X → Y} (conditional on Z) and the graph with the edge W → Y removed.
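The constrained-Dirichlet sampler above can be sketched as follows (the 8-cell toy table and the predicate standing in for the polytope inequalities are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def constrained_dirichlet_sample(alpha, constraint, n_samples, max_tries=100000):
    """Rejection sampler: propose from an unconstrained Dirichlet and keep
    draws satisfying `constraint`, a boolean predicate standing in for the
    linear inequalities implied by the corresponding polytope."""
    out = []
    tries = 0
    while len(out) < n_samples and tries < max_tries:
        theta = rng.dirichlet(alpha)
        tries += 1
        if constraint(theta):
            out.append(theta)
    return np.array(out), tries

# Toy 8-cell table for P(Y, X, W | Z = z): keep draws whose first cell
# is below 0.5 (a stand-in for a real polytope constraint).
samples, tries = constrained_dirichlet_sample(
    np.ones(8), lambda t: t[0] < 0.5, n_samples=50)
```

In the falsification test described below, a rejection rate above 95% in the initial trial would flag the proposed (W, Z, θ) model as a bad fit.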
Footnote 4: That is, we allow for dependence between W and Y given {X, Z}, interpreting the decision of independence used in Rule 1 as being only an indicator of approximate independence.

Table 2: Some of the algebraic bounds found by symbolic manipulation of linear inequalities.

  ω_xw ≥ κ_{1x.w} + L^{YU}_{xw} (κ_{0x'.w} + κ_{1x'.w})                                            (7)
  ω_xw ≤ 1 − (κ_{0x.w'} − ϵ_w(κ_{0x.w'} + κ_{1x.w'})) / U^{XU}_{xw'}                               (8)
  ω_xw − ω_xw' U^{XU}_{x'w} ≤ κ_{1x.w} + ϵ_w(κ_{0x'.w} + κ_{1x'.w})                                (9)
  ω_xw + ω_x'w − ω_x'w' ≥ κ_{1x'.w} + κ_{1x.w} − κ_{1x'.w'} + κ_{1x.w'} − χ_{xw'}(Ū + L + 2ϵ_w) + L  (10)

Notation: x, w ∈ {0, 1}; x' = 1 − x and w' = 1 − w are the complementary values. L^{YU}_{xw} ≡ max(0, P(Y = 1 | X = x, W = w) − ϵ_y) and U^{YU}_{xw} ≡ min(1, P(Y = 1 | X = x, W = w) + ϵ_y); L^{XU}_{xw} ≡ max(0, P(X = x | W = w) − ϵ_x), with U^{XU}_{xw} defined accordingly. Finally, Ū ≡ max{U^{YU}_{xw}}, L ≡ min{L^{YU}_{xw}} and χ_{xw} ≡ κ_{1x.w} + κ_{0x.w}. The full set of bounds, with proofs, can be found in the Supplementary Material.

Our "falsification test" in Step 5 is a simple and pragmatic one: our initial trial of rejection sampling proposes M samples, and if more than 95% of them are rejected, we take this as an indication that the proposed model provides a bad fit. The final result is a set of posterior distributions over bounds, possibly contradictory, which should be summarized as appropriate. Section 5 provides an example.

4 Algebraic Bounds and the Back-substitution Algorithm

Posterior sampling is expensive within the context of Bayesian WPP: constructing the dual polytope for possibly millions of instantiations of the problem is time-consuming, even if each problem is small. Moreover, the numerical procedure described in Section 3 does not provide any insight into how the different free parameters {ϵ_w, ϵ_y, ϵ_x, β, β̄} interact to produce bounds, unlike the analytical bounds available in the standard IV case. [16] derives analytical bounds under (3) given a fixed, numerical value of ϵ_w. We know of no previous analytical bounds as an algebraic function of ϵ_w.
In the Supplementary Material, we provide a series of algebraic bounds as a function of our free parameters. Due to limited space, we show only some of the bounds in Table 2. They illustrate qualitative aspects of our free parameters. For instance, if ϵ_y = 1 and β = β̄ = 1, then L^{YU}_{xw} = 0 and (7) collapses to η_xw ≥ ζ_{1x.w}, one of the original relations found by [1] for the standard IV model. Decreasing ϵ_y will linearly increase L^{YU}_{xw}, tightening the corresponding lower bound in (7). If also ϵ_w = 0 and ϵ_x = 1, it follows from (8) that η_xw ≤ 1 − ζ_{0x.w'}. Equation (3) implies ω_x'w − ω_x'w' ≤ ϵ_w, and as such, by setting ϵ_w = 0 we have that (10) implies η_xw ≥ η_{1x.w} + η_{1x.w'} − η_{1x'.w'} − η_{0x.w'}, one of the most complex relationships in [1]. Further geometric intuition about the structure of the binary standard IV model is given by [19]. These bounds are not tight, in the sense that we opted not to fully exploit all possible algebraic combinations for some results, such as (10): there we use L ≤ η⋆_xw ≤ Ū and 0 ≤ δ⋆_w ≤ 1 instead of all possible combinations resulting from (4) and (5). The proof idea in the Supplementary Material can be further refined, at the expense of clarity. Because our derivation is a further relaxation, the implied bounds are more conservative (i.e., wider). Besides providing insight into the structure of the problem, this gives a very efficient way of checking whether a proposed parameter vector {ζ⋆_{yx.w}} is valid, as well as finding the bounds: use back-substitution on the symbolic set of constraints to find box constraints L_xw ≤ ω_xw ≤ U_xw. The proposed parameter will be rejected whenever an upper bound is smaller than a lower bound, and (2) can be trivially optimized conditioning only on the box constraints; this is yet another relaxation, added on top of the ones used to generate the algebraic inequalities.
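A minimal sketch of this box-constraint propagation, assuming each linear relation has already been normalized to the form a ≤ c + coef·b (the variable names and the toy relation are placeholders):

```python
def tighten_boxes(boxes, relations, tol=1e-12, max_iter=100):
    """Back-substitution sketch: `boxes` maps a variable name to [low, high];
    each relation (a, coef, b, c) encodes  a <= c + coef * b.  We repeatedly
    substitute the current bounds of b to tighten the upper bound of a (and,
    when coef > 0, rearrange to tighten the lower bound of b).  Bounds never
    widen, so the iteration converges."""
    for _ in range(max_iter):
        changed = False
        for a, coef, b, c in relations:
            lb_b, ub_b = boxes[b]
            # a <= c + coef*b : upper-bound a with the worst case of b
            new_ua = c + (coef * ub_b if coef >= 0 else coef * lb_b)
            if new_ua < boxes[a][1] - tol:
                boxes[a][1] = new_ua
                changed = True
            # rearranged: b >= (a - c)/coef when coef > 0
            if coef > 0:
                new_lb = (boxes[a][0] - c) / coef
                if new_lb > boxes[b][0] + tol:
                    boxes[b][0] = new_lb
                    changed = True
        if not changed:
            break
    return boxes

boxes = {"w00": [0.0, 1.0], "w01": [0.4, 0.9]}
# toy stand-in for a relation like (9):  w00 <= 0.05 + w01
tighten_boxes(boxes, [("w00", 1.0, "w01", 0.05)])
infeasible = any(hi < lo for lo, hi in boxes.values())
```

A proposed parameter vector would be rejected whenever `infeasible` becomes true, i.e., some upper bound drops below its lower bound.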
We initialize by intersecting all algebraic box constraints (of which (7) and (8) are examples); next, we refine these by scanning relations ±ω_xw − aω_xw' ≤ c, such as (9), in lexicographical order, tightening the bounds of ω_xw using the current upper and lower bounds on ω_xw' where possible. We then identify constraints L_xww' ≤ ω_xw − ω_xw' ≤ U_xww', starting from −ϵ_w ≤ ω_xw − ω_xw' ≤ ϵ_w and the existing bounds, and plug these into relations ±ω_xw + ω_x'w − ω_x'w' ≤ c (as exemplified by (10)) to get refined bounds on ω_xw as functions of (L_x'ww', U_x'ww'). We iterate this until convergence, which is guaranteed since bounds never widen at any iteration. This back-substitution of inequalities follows the spirit of message-passing and is an order of magnitude more efficient than the fully numerical solution, while not increasing the width of the bounds by too much. In the Supplementary Material we provide evidence for this claim. In our experiments in Section 5, the back-substitution method was used in the testing stage of WPP. After collecting posterior samples, we calculated the posterior expected value of the contingency tables and ran the numerical procedure to obtain the final tight bound (footnote 5).

5 Experiments

We describe a set of synthetic studies, followed by one study with the influenza data discussed by [9, 18]. In the synthetic study setup, we compare our method against NE1 and NE2, two naïve point estimators defined by back-door adjustment on the whole of W and on the empty set, respectively. The former is widely used in practice, even when there is no causal basis for doing so [15]. The point estimator of [6], based solely on the faithfulness assumption, is also assessed. We generate problems where conditioning on the whole set W is guaranteed to give incorrect estimates (footnote 6). Here, |W| = 8.
We analyze two variations: one where it is guaranteed that at least one valid witness × admissible set pair exists; in the other, latent variables in the graph are common parents also of X and Y, so no valid witness exists. We divide each variation into two subcases: in the first, "hard" subcase, parameters are chosen (by rejection sampling) so that NE1 has a bias of at least 0.1 in the population; in the second, no such selection exists, and as such our exchangeable parameter sampling scheme makes the problem relatively easy. We summarize each WPP bound by the posterior expected value of the lower and upper bounds. In general, WPP returns more than one bound: we select the upper/lower bound corresponding to the (W, Z) pair where the sum of BDeu scores for W ⊥̸⊥ Y | Z and W ⊥⊥ Y | Z ∪ {X} is highest. Our main evaluation metric for an estimate is the Euclidean distance (henceforth, "error") between the true ACE and the closest point of the given estimate, whether the estimate is a point or an interval. For methods that provide point estimates (NE1, NE2, and faithfulness), this is just the absolute value of the difference between the true ACE and the estimated ACE. For WPP, the error of the interval [L, U] is zero if the true ACE lies in this interval. We report the error average and the error tail mass at 0.1, the latter meaning the proportion of cases where the error exceeds 0.1. The comparison is not straightforward, since the trivial interval [−1, 1] will always have zero bias according to this definition. This is a trade-off, to be set according to an agreed level of information loss, measured by the width of the resulting intervals. This is discussed in the Supplemental Material. We run simulations at two levels of parameters: β = 0.9, β̄ = 1.1, and the same configuration except for β = β̄ = 1. The former gives somewhat wide intervals. As Manski emphasizes [11], this is the price for making fewer assumptions.
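The evaluation metric above can be made concrete with a small helper (a sketch; point estimates are floats, WPP estimates are (low, high) tuples):

```python
def interval_error(true_ace, estimate):
    """Distance from the true ACE to the closest point of the estimate:
    zero if an interval covers the truth, absolute difference otherwise."""
    if isinstance(estimate, tuple):
        low, high = estimate
        if low <= true_ace <= high:
            return 0.0
        return min(abs(true_ace - low), abs(true_ace - high))
    return abs(true_ace - estimate)

def tail_mass(errors, threshold=0.1):
    """Proportion of cases where the error exceeds the threshold."""
    return sum(e > threshold for e in errors) / len(errors)
```

Note that under this metric the trivial interval [−1, 1] always scores zero, which is why interval width is reported alongside the error.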
For the cases where no witness exists, Entner's Rule 1 should theoretically report no solution. In [6], stringent thresholds for accepting the two conditions of Rule 1 are adopted. Instead, we take a more relaxed approach, using a uniform prior on the hypothesis of independence and a BDeu prior with an effective sample size of 10. As such, due to the nature of our parameter randomization, the method will almost always (typically > 90% of the time) propose at least one witness. Given this theoretical failure, for the problems where no exact solution exists, we assess how sensitive the methods are when conclusions are drawn from "approximate independencies" instead of exact ones. We simulate 100 datasets for each one of the four cases (hard case/easy case, with theoretical solution/without theoretical solution), with 5000 points per dataset and 1000 Monte Carlo samples per decision. Results are summarized in Table 3 for the case ϵ_w = ϵ_x = ϵ_y = 0.2, β = 0.9, β̄ = 1.1. Notice that WPP is quite stable, while the other methods have strengths and weaknesses depending on the setup. For the unsolvable cases, we average over the approximately 99% of cases where some solution was reported; in theory, no conditional independences hold and no solution should be reported, but WPP shows empirical robustness for the true ACE in these cases.

Table 3: Summary of the outcome of the synthetic studies (β = 1, β̄ = 1). Each entry for a particular method is a pair (bias average, bias tail mass at 0.1), as explained in the main text. The last column is the median width of the WPP interval. In a similar experiment with β = 0.9, β̄ = 1.1, WPP achieves nearly zero error, with interval widths around 0.50. A much more detailed table for many other cases is provided in the Supplementary Material.

  Case              NE1          NE2          Faith.       WPP          Width
  Hard/Solvable     0.12 / 1.00  0.02 / 0.03  0.05 / 0.05  0.01 / 0.01  0.24
  Easy/Solvable     0.01 / 0.01  0.07 / 0.24  0.02 / 0.01  0.00 / 0.00  0.24
  Hard/Unsolvable   0.16 / 1.00  0.20 / 0.88  0.19 / 0.95  0.07 / 0.25  0.24
  Easy/Unsolvable   0.09 / 0.32  0.14 / 0.56  0.12 / 0.53  0.03 / 0.08  0.23

Footnote 5: Sometimes, however, the expected contingency table given by the back-substitution method would fall outside the feasible region of the fully specified linear program; this is expected to happen from time to time, as the analytical bounds are looser. In such a situation, we report the bounds given by the back-substitution samples.

Footnote 6: In detail: we generate graphs where W ≡ {Z1, Z2, . . . , Z8}. Four independent latent variables L1, . . . , L4 are added as parents of each of {Z5, . . . , Z8}; L1 is also a parent of X, and L2 a parent of Y. L3 and L4 are each randomly assigned to be a parent of either X or Y, but not both. {Z5, . . . , Z8} have no other parents. The graph over Z1, . . . , Z4 is chosen by adding edges uniformly at random according to the lexicographic order. In consequence, using the full set W for back-door adjustment is always incorrect, as at least four paths X ← L1 → Zi ← L2 → Y are active for i = 5, 6, 7, 8. The conditional probabilities of a vertex given its parents are generated by a logistic regression model with pairwise interactions, where parameters are sampled according to a zero-mean Gaussian with standard deviation 10 / (number of parents). Parameter values are truncated so that all conditional probabilities are between 0.025 and 0.975.

Our empirical study concerns the effect of influenza vaccination on a patient being hospitalized later on with chest problems. X = 1 means the patient got a flu shot, and Y = 1 indicates the patient was hospitalized. A negative ACE therefore suggests a desirable vaccine. The study was originally discussed by [12]. Shots were not randomized, but doctors were randomly assigned to receive a reminder letter to encourage their patients to be inoculated, recorded as GRP. This suggests the standard IV model in Figure 1(d), with W = GRP and U unobservable.
Using the bounds of [1] and the observed frequencies gives an interval of [−0.23, 0.64] for the ACE. WPP could not validate GRP as a witness, instead returning as the highest-scoring pair the witness DM (patient had a history of diabetes prior to vaccination) with an admissible set composed of AGE (dichotomized at 60 years) and SEX. Here, we excluded GRP as a possible member of an admissible set, under the assumption that it cannot be a common cause of X and Y. Choosing ϵ_w = ϵ_y = ϵ_x = 0.2 and β = 0.9, β̄ = 1.1, we obtain the posterior expected interval [−0.10, 0.17]. This does not mean the vaccine is more likely to be bad (positive ACE) than good: the posterior distribution is over bounds, not over points, and is completely agnostic about the distribution within the bounds. Notice that even though we allow for full dependence between all of our variables, the bounds are considerably stricter than in the standard IV model, due to the weakening of hidden confounder effects postulated by observing conditional independences. Posterior plots and sensitivity analysis are included in the Supplementary Material; for further discussion see [18, 9].

6 Conclusion

Our model provides a novel compromise between point estimators given by the faithfulness assumptions and bounds based on instrumental variables. We believe such an approach should become a standard item in the toolbox of anyone who needs to perform an observational study. R code is available at http://www.homepages.ucl.ac.uk/~ucgtrbd/wpp. Unlike risky Bayesian approaches that put priors directly on the parameters of the unidentifiable latent variable model P(Y, X, W, U | Z), the constrained Dirichlet prior does not suffer from massive sensitivity to the choice of hyperparameters, as discussed at length in [18] and the Supplementary Material.
By focusing on bounds, WPP keeps inference more honest, providing a compromise between a method purely based on faithfulness and purely theory-driven analyses that overlook competing models suggested by independence constraints. As future work, we will look at a generalization of the procedure beyond relaxations of chain structures W → X → Y. Much of the machinery developed here, including Entner's Rules, can be adapted to the case where the causal ordering is unknown: the search for "Y-structures" [10] generalizes the chain structure search to this case. We will also look into ways of suggesting plausible values for the relaxation parameters, already touched upon in the Supplementary Material. Finally, the techniques used to derive the symbolic bounds in Section 4 may prove useful in a more general context and complement other methods for finding subsets of useful constraints, such as the information-theoretical approach of [8] and the graphical approach of [7].

Acknowledgements. We thank McDonald, Hiu and Tierney for their flu vaccine data, and the anonymous reviewers for their valuable feedback.

References

[1] A. Balke and J. Pearl. Bounds on treatment effects from studies with imperfect compliance. Journal of the American Statistical Association, pages 1171–1176, 1997.
[2] W. Buntine. Theory refinement on Bayesian networks. Proceedings of the 7th Conference on Uncertainty in Artificial Intelligence (UAI 1991), pages 52–60, 1991.
[3] W. Buntine and A. Jakulin. Applying discrete PCA in data analysis. Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence (UAI 2004), pages 59–66, 2004.
[4] L. Chen, F. Emmert-Streib, and J. D. Storey. Harnessing naturally randomized transcription to infer regulatory relationships among genes. Genome Biology, 8:R219, 2007.
[5] A. P. Dawid. Causal inference using influence diagrams: the problem of partial compliance. In P. J. Green, N. L. Hjort, and S. Richardson, editors, Highly Structured Stochastic Systems, pages 45–65. Oxford University Press, 2003.
[6] D. Entner, P. Hoyer, and P. Spirtes. Data-driven covariate selection for nonparametric estimation of causal effects. JMLR W&CP: AISTATS 2013, 31:256–264, 2013.
[7] R. Evans. Graphical methods for inequality constraints in marginalized DAGs. Proceedings of the 22nd Workshop on Machine Learning and Signal Processing, 2012.
[8] P. Geiger, D. Janzing, and B. Schölkopf. Estimating causal effects by bounding confounding. Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, pages 240–249, 2014.
[9] K. Hirano, G. Imbens, D. Rubin, and X.-H. Zhou. Assessing the effect of an influenza vaccine in an encouragement design. Biometrics, 1:69–88, 2000.
[10] S. Mani, G. Cooper, and P. Spirtes. A theoretical study of Y structures for causal discovery. Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence (UAI 2006), pages 314–323, 2006.
[11] C. Manski. Identification for Prediction and Decision. Harvard University Press, 2007.
[12] C. McDonald, S. Hiu, and W. Tierney. Effects of computer reminders for influenza vaccination on morbidity during influenza epidemics. MD Computing, 9:304–312, 1992.
[13] C. Meek. Strong completeness and faithfulness in Bayesian networks. Proceedings of the 11th Conference on Uncertainty in Artificial Intelligence (UAI 1995), pages 411–418, 1995.
[14] J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2000.
[15] J. Pearl. Myth, confusion, and science in causal analysis. UCLA Cognitive Systems Laboratory, Technical Report (R-348), 2009.
[16] R. Ramsahai. Causal bounds and observable constraints for non-deterministic models. Journal of Machine Learning Research, pages 829–848, 2012.
[17] R. Ramsahai and S. Lauritzen. Likelihood analysis of the binary instrumental variable model. Biometrika, 98:987–994, 2011.
[18] T. Richardson, R. Evans, and J. Robins. Transparent parameterizations of models for potential outcomes. In J. Bernardo, M. Bayarri, J. Berger, A. Dawid, D. Heckerman, A. Smith, and M. West, editors, Bayesian Statistics 9, pages 569–610. Oxford University Press, 2011.
[19] T. Richardson and J. Robins. Analysis of the binary instrumental variable model. In R. Dechter, H. Geffner, and J. Y. Halpern, editors, Heuristics, Probability and Causality: A Tribute to Judea Pearl, pages 415–444. College Publications, 2010.
[20] J. Robins, R. Scheines, P. Spirtes, and L. Wasserman. Uniform consistency in causal inference. Biometrika, 90:491–515, 2003.
[21] P. Rosenbaum. Observational Studies. Springer-Verlag, 2002.
[22] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction and Search. Cambridge University Press, 2000.
[23] T. VanderWeele and I. Shpitser. A new criterion for confounder selection. Biometrics, 64:1406–1413, 2011.
Self-Adaptable Templates for Feature Coding Xavier Boix1,2∗ Gemma Roig1,2∗ Salomon Diether1 Luc Van Gool1 1Computer Vision Laboratory, ETH Zurich, Switzerland 2LCSL, Massachusetts Institute of Technology & Istituto Italiano di Tecnologia, Cambridge, MA {xboix,gemmar}@mit.edu {boxavier,gemmar,sdiether,vangool}@vision.ee.ethz.ch Abstract Hierarchical feed-forward networks have been successfully applied in object recognition. At each level of the hierarchy, features are extracted and encoded, followed by a pooling step. Within this processing pipeline, the common trend is to learn the feature coding templates, often referred as codebook entries, filters, or over-complete basis. Recently, an approach that apparently does not use templates has been shown to obtain very promising results. This is the second-order pooling (O2P) [1, 2, 3, 4, 5]. In this paper, we analyze O2P as a coding-pooling scheme. We find that at testing phase, O2P automatically adapts the feature coding templates to the input features, rather than using templates learned during the training phase. From this finding, we are able to bring common concepts of coding-pooling schemes to O2P, such as feature quantization. This allows for significant accuracy improvements of O2P in standard benchmarks of image classification, namely Caltech101 and VOC07. 1 Introduction Many object recognition schemes, inspired from biological vision, are based on feed-forward hierarchical architectures, e.g. [6, 7, 8]. In each level in the hierarchy, the algorithms can be usually divided into the steps of feature coding and spatial pooling. The feature coding extracts similarities between the set of input features and a set of templates (the so called filters, over-complete basis or codebook), and then, the similarity responses are transformed using some non-linearities. Finally, the spatial pooling extracts one single vector from the set of transformed responses. The specific architecture of the network (e.g. 
how many layers), and the specific algorithms for the coding-pooling at each layer are usually set for a recognition task and dataset, cf. [9]. Second-order Pooling (O2P) is an alternative to the aforementioned coding-pooling scheme. O2P was introduced in medical imaging to analyze magnetic resonance images [1, 2], and lately O2P has achieved state-of-the-art results in some of the traditional computer vision tasks [3, 4, 5, 10]. A surprising fact about O2P is that it is formulated without feature coding templates [5]. This is in contrast to the common coding-pooling schemes, in which the templates are learned during a training phase and, at the testing phase, remain fixed to the learned values. Motivated by the intriguing properties of O2P, in this paper we try to re-formulate O2P as a coding-pooling scheme. In doing so, we find that O2P actually computes similarities to feature coding templates, as the rest of the coding-pooling schemes do. Yet, what remains uncommon about O2P is that the templates are "recomputed" for each specific input, rather than being fixed to learned values. In O2P, the templates are self-adapted to the input, and hence they do not require learning. From our formulation, we are able to bring common concepts of coding-pooling schemes to O2P, such as feature quantization. This allows us to achieve significant improvements in the accuracy of O2P for image classification. We report experiments on two challenging benchmarks for image classification, namely Caltech101 [11] and VOC07 [12].

∗Both first authors contributed equally.

2 Preliminaries

In this section, we introduce O2P as well as several coding-pooling schemes, and identify some common terminology in the literature. This will serve as a basis for the new formulation of O2P that we introduce in the following section. The algorithms that we analyze in this section are usually part of a layer of a hierarchical network for object recognition.
The input to these algorithms is a set of feature vectors that come from the output of the previous layer, or from the raw image. Let {x_i}_N be the set of N input feature vectors, x_i ∈ R^M, indexed by i ∈ {1, . . . , N}. The output of the algorithm is a single vector, which we denote as y, and it may have a different dimensionality than the input vectors. In the following subsections, we present the algorithms and terminology of template-based methods, and then we introduce the formulation of O2P that appears in the literature, which apparently does not use templates.

2.1 Coding-Pooling based on Evaluating Similarities to Templates

Template-based methods are built upon similarities between the input vectors and a set of templates. Depending on the terminology of each algorithm, the templates may be denoted as filters, codebook, or over-complete basis. From now on, we refer to all of them as templates. We denote the set of templates as {b_k ∈ R^M}_P. In this paper, b_k and the input feature vectors x_i have the same dimensionality, M. The set of templates is fixed to learned values during the training phase. There are many possible learning algorithms, but analyzing them is not necessary here. The algorithms that are interesting for our purposes start by computing a similarity measure between the input feature vectors {x_i}_N and the templates {b_k}_P. Let Γ(x_i, b_k) be the similarity function, which depends on each algorithm. We define γ_i as the vector that contains the similarities of x_i to the set of templates {b_k}, and γ ∈ R^{M×P} as the matrix whose columns are the vectors γ_i, i.e.

  γ_ki = Γ(x_i, b_k).    (1)

Once γ is computed, the algorithms that we analyze apply some non-linear transformation to γ, and then the resulting responses are merged together with the so-called pooling operation. The pooling consists of generating a single response value for each template.
We denote by g_k(γ) the function that includes both the non-linear transformation and the pooling operation, where g_k : R^{M×P} → R. We include both operations in the same function, but in the literature they are usually presented as two separate steps. Finally, the output vector y is built using {g_k(γ)}_P, {b_k}_P, and {x_i}_N, depending on the algorithm. It is also quite common to concatenate the outputs of neighboring regions to generate the final output of the layer. We now show how the presented terminology applies to some methods based on evaluating similarities to templates, namely assignment-based methods and Fisher Vectors. In the sequel, these algorithms will be a basis to reformulate O2P.

Assignment-based Methods The popular Bag-of-Words and some of its variants fall into this category, e.g. [13, 14, 15]. These methods consist of assigning each input vector x_i to a set of templates (the so-called vector quantization), and then building a histogram of the assignments, which corresponds to the average pooling operation. We now present them using our terminology. After computing the similarities to the templates, γ (usually based on the ℓ2 distance), g_k(γ) computes both the vector quantization and the pooling. Let s be the number of templates to which each input vector is assigned, and let γ′_i be the resulting assignment vector of x_i (i.e. γ′_i is the result of applying vector quantization to x_i). γ′_i has s entries set to 1 and the rest to 0, which indicate the assignment. Finally, g_k(γ) also computes the pooling for the assignments corresponding to template k, i.e. g_k(γ) = (1/N) Σ_{i<N} γ′_ki. The final output vector is the concatenation of the resulting pooling of the different templates, y = (g_1(γ), . . . , g_P(γ)).

Fisher Vectors It uses the first and second order statistics of the similarities between the features and the templates [16].
Fisher Vector builds two vectors for each template b_k:

  Φ^(1)_k = (1/A_k) Σ_{i<N} γ_ki (b_k − x_i),
  Φ^(2)_k = (1/B_k) Σ_{i<N} γ_ki ((b_k − x_i)² − C_k),    (2)

where

  γ_ki = (1/Z_k) exp(−(1/2)(x_i − b_k)^t D_k (x_i − b_k)).    (3)

A_k, B_k, and C_k are learned constants, Z_k is a normalization factor, and D_k is a learned constant matrix of the model. Note that in Eq. (3), γ_ki is a similarity between the feature vector x_i and the template b_k. The final output vector is y = (Φ^(1)_1, Φ^(2)_1, . . . , Φ^(1)_P, Φ^(2)_P). For further details we refer the reader to [16]. We use our terminology to do a very simple re-write of the terms. We define g_k(γ) and b^F_k (we use the super-index F to indicate that they come from Fisher vectors, and are different from b_k) as

  g_k(γ) = ∥(Φ^(1)_k, Φ^(2)_k)∥₂,   b^F_k = (1/g_k(γ)) (Φ^(1)_k, Φ^(2)_k).    (4)

We can see that the templates of Fisher vectors, b^F_k, are obtained by applying transformations to the original learned template b_k that involve the input set of features {x_i}. g_k(γ) is the norm of (Φ^(1)_k, Φ^(2)_k), which gives an idea of the importance of each template in {x_i}, similarly to g_k(γ) in assignment-based methods. Note that b^F_k and g_k(γ) are related to only one fixed template, b_k. The final output vector becomes y = (g_1(γ) b^F_1, . . . , g_P(γ) b^F_P).

2.2 Second-Order Pooling

Second-order Pooling (O2P) was introduced in medical imaging to describe the voxels produced in diffusion tensor imaging [1], and to process tensor fields [2, 17]. O2P starts by building a correlation matrix from the set of feature (column) vectors {x_i ∈ R^M}_N, i.e.

  K = (1/N) Σ_{i<N} x_i x_i^t,    (5)

where x_i^t is the transpose of x_i, and K ∈ R^{M×M} is a square matrix. K is a symmetric positive definite (SPD) matrix, and contains second-order statistics of {x_i}. The set of SPD matrices forms a Riemannian manifold, and hence the conventional operations in Euclidean space cannot be used. Several metrics have been proposed for SPD matrices, and the most celebrated is the Log-Euclidean metric [17].
This metric consists of mapping the SPD matrices to the tangent space by using the logarithm of the matrix, log(K). In the tangent space, the standard Euclidean metrics can be used. The logarithm of an SPD matrix can be computed in practice by applying the logarithm individually to each of the eigenvalues of K [18]. Thus, the final output vector for O2P can be written as

  y = vec(log(K)) = vec( Σ_{k<M} log(λ_k) e_k e_k^t ),    (6)

where e_k are the eigenvectors of K, and λ_k the corresponding eigenvalues. The vec(·) operator vectorizes log(K). In Eq. (6), apparently, there are no similarities to a set of templates. The absence of templates makes O2P look quite different from template-based methods. Recently, O2P achieved state-of-the-art results in some computer vision tasks, e.g. object detection [3], semantic segmentation [5, 10], and patch description [4]. Both reasons motivate us to further analyze O2P in relation to template-based methods.

3 Self-Adaptability of the Templates

In this section, we introduce a formulation that relates O2P and template-based methods. The new formulation is based on comparing two final representation vectors, rather than defining how the final vector y is built. We denote by ⟨y_r, y_s⟩ the inner product between y_r and y_s, which are the final representation vectors from two sets of input feature vectors, {x^r_i}_N and {x^s_i}_N, respectively, where we use the superscripts r and s to indicate the respective representation for each set. It will become clear during this section why we analyze ⟨y_r, y_s⟩ instead of y. We divide the analysis into three subsections. In subsection 3.1, we re-write the formulation of the template-based methods of Section 2 with the inner product ⟨y_r, y_s⟩. In subsection 3.2, we do the same for O2P, and this unveils that O2P is also based on evaluating similarities to templates. In subsection 3.3, we analyze the characteristics of the templates in O2P, which have the particularity that they are self-adapted to the input.
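As a concrete reference for Eqs. (5)-(6), the O2P descriptor can be computed in a few lines (a sketch using numpy's eigendecomposition; the random features are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def o2p(X):
    """Second-order pooling: correlation matrix K (Eq. (5)) mapped to the
    tangent space via log(K), computed by taking the logarithm of each
    eigenvalue of K (Eq. (6))."""
    N = X.shape[0]
    K = X.T @ X / N                     # (1/N) sum_i x_i x_i^T
    lam, E = np.linalg.eigh(K)          # K is SPD for generic inputs
    log_K = (E * np.log(lam)) @ E.T     # sum_k log(lambda_k) e_k e_k^T
    return log_K.ravel()                # y = vec(log(K))

X = rng.normal(size=(200, 4))           # N = 200 features of dimension M = 4
y = o2p(X)
```

Exponentiating the eigenvalues of the reshaped output recovers K, a quick sanity check of the log-eigenvalue computation.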
3.1 Re-Formulation of Template-Based Methods

We re-write a generic formulation for the template-based methods described in Section 2 with the inner product between two final output vectors. The algorithms of Section 2 can be expressed as

  ⟨y_r, y_s⟩ = Σ_{k<P} Σ_{q<P} g_k(γ^r) g_q(γ^s) S(b^r_k, b^s_q),    (7)

where γ_ki = Γ(x_i, b_k), and S(u, v) is a similarity function between the templates that depends on each algorithm. Recall that g_k(γ) is a function that includes the non-linearities and the pooling of the similarities between the input feature vectors and the templates. To see how Eq. (7) arises naturally from the algorithms of Section 2, we now analyze them in terms of this formulation.

Assignment-Based Methods The inner product between two final output vectors can be written as

  ⟨y_r, y_s⟩ = (g_1(γ^r), . . . , g_P(γ^r))^t (g_1(γ^s), . . . , g_P(γ^s))
            = Σ_{k<P} g_k(γ^r) g_k(γ^s) = Σ_{k<P} Σ_{q<P} g_k(γ^r) g_q(γ^s) I(b^r_k = b^s_q),    (8)

where the last step introduces an outer summation, and the indicator function I(·) eliminates the unnecessary cross terms. Comparing this last equation to Eq. (7), we can identify S(b^r_k, b^s_q) with the indicator function (which returns 1 when b^r_k = b^s_q, and 0 otherwise).

Fisher Vectors The inner product between two final Fisher Vectors is

  ⟨y_r, y_s⟩ = (g_1(γ^r) b^{rF}_1, . . . , g_P(γ^r) b^{rF}_P)^t (g_1(γ^s) b^{sF}_1, . . . , g_P(γ^s) b^{sF}_P)
            = Σ_{k<P} Σ_{q<P} g_k(γ^r) g_q(γ^s) I(b^r_k = b^s_q) ⟨b^{rF}_k, b^{sF}_q⟩.    (9)

The indicator function appears for the same reason as in assignment-based methods. The final templates for each set of input vectors, b^{rF}_k and b^{sF}_k respectively, are compared with each other through the similarity (b^{rF}_k)^t b^{sF}_q. Thus, S(b^{rF}_k, b^{sF}_q) in Eq. (7) is equal to I(b^r_k = b^s_q)(b^{rF}_k)^t b^{sF}_q.

3.2 O2P as Coding-Pooling based on Template Similarities

We now re-formulate O2P in the same way as we did for template-based methods in the previous subsection. This allows relating O2P to template-based methods, and shows that O2P also uses similarities to templates.
We re-write the definition of O2P in Eq. (6) with ⟨y^r, y^s⟩. Using the property vec(A)^t vec(B) = tr(A^t B), where tr(·) is the trace of a matrix, ⟨y^r, y^s⟩ becomes (in the supplementary material we do the full derivation)

⟨y^r, y^s⟩ = ⟨vec(log(K^r)), vec(log(K^s))⟩ = Σ_{k<M} Σ_{q<M} log(λ^r_k) log(λ^s_q) ⟨e^r_k, e^s_q⟩²,   (10)

where e_k e_k^t is a square matrix, and the eigenvectors, {e^r_k}_M and {e^s_k}_M, are compared all against each other with ⟨e^r_k, e^s_q⟩². Going back to the generic formulation of template-based methods in Eq. (7), we can see that the similarity function between the templates, S(e^r_k, e^s_q), can be identified in O2P as ⟨e^r_k, e^s_q⟩². Also, note that in O2P the sums go over M, which is the number of eigenvectors, whereas in Eq. (7) they go over P, the number of templates. Finally, g_k(γ) in Eq. (7) corresponds to log(λ_k) in O2P.

Table 1: Summary of the elements of our formulation for assignment-based methods, Fisher Vectors and O2P.

Method           | S(b^r_k, b^s_q)                       | γ_{ki} = Γ(x_i, b_k) | templates     | g_k(γ)
Assignment-based | I(b^r_k = b^s_q)                      | ⟨x_i, b_k⟩           | fixed         | (1/N) Σ_i γ′_{ki}
Fisher Vectors   | I(b^r_k = b^s_q) ⟨b^{rF}_k, b^{sF}_q⟩ | Eq. (3)              | fixed/adapted | ‖(Φ^{(1)}_k, Φ^{(2)}_k)‖₂
O2P              | ⟨b^r_k, b^s_q⟩²                       | ⟨x_i, b_k⟩²          | self-adapted  | log((1/N) Σ_i γ_{ki})

At this point, we have expressed O2P in a similar way as template-based methods. Yet, we still have to find the similarity between the input feature vectors and the templates. For that purpose, we use the definition of eigenvalues and eigenvectors, i.e. λ_k e_k = K e_k, and also that tr(e_k e_k^t) = 1 (the eigenvectors are orthonormal). Then, we can derive the following equivalence: λ_k = λ_k tr(e_k e_k^t) = tr(K e_k e_k^t). Replacing K by (1/N) Σ_i x_i x_i^t, we find that the eigenvalues, λ_k, can be written using the similarity between the input vectors, x_i, and the eigenvectors, e_k:

λ_k = (1/N) Σ_i tr((x_i x_i^t)(e_k e_k^t)) = (1/N) Σ_i ⟨x_i, e_k⟩².   (11)

Finally, we can integrate all the above derivations in Eq.
(10), and we obtain

⟨y^r, y^s⟩ = Σ_{k<M} Σ_{q<M} g_k(γ^r) g_q(γ^s) ⟨e^r_k, e^s_q⟩²,   (12)

where

g_k(γ) = log(λ_k) = log( (1/N) Σ_{i<N} γ_{ki} ),   (13)

and

γ_{ki} = Γ(x_i, e_k) = ⟨x_i, e_k⟩².   (14)

We can see by analyzing Eq. (12) that it takes the same form as the general equation of template-based methods, Eq. (7). Note that the eigenvectors take the same role as the set of templates, i.e. b_k = e_k and P = M. Also, observe that S(b^r_k, b^s_q) is the square of the inner product between eigenvectors, Γ(x_i, b_k) is the square of the inner product between the input vectors and the eigenvectors, and the pooling operation is the logarithm of the average of the similarities. In Table 1 we summarize the corresponding elements of all the described methods.

3.3 Self-Adaptive Templates

We define self-adaptive templates as templates that depend only on the input set of feature vectors, and are not fixed to predefined values. This is the case in O2P, because the templates in O2P correspond to the eigenvectors computed from the set of input feature vectors. The templates in O2P are not fixed to values learned during the training phase. Interestingly, the final templates in Fisher Vectors, b^F_k, are also partially self-adapted to the input vectors. Note that b^F_k are obtained by modifying the fixed learned templates, b_k, with the input feature vectors. Finally, note that in O2P the number of templates is equal to the dimensionality of the input feature vectors. Thus, in O2P the number of templates cannot be increased without changing the input vectors' length, M. This begs the following question: do M templates allow for sufficient generalization for object recognition for any set of input vectors? We analyze this question in the next section.

4 Application: Quantization for O2P

We observe in the experiments section that the performance of O2P degrades when the number of vectors in the set of input features increases.
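The re-formulation of Eqs. (10)-(14) is easy to check numerically. A small sketch (variable names are ours) that verifies both the eigenvalue identity of Eq. (11) and the double-sum form of the inner product:

```python
import numpy as np

rng = np.random.default_rng(1)
Xr = rng.standard_normal((100, 6))     # two sets of input feature vectors
Xs = rng.standard_normal((80, 6))

def eig_o2p(X):
    """Return eigenvalues, eigenvectors and log(K) for K = (1/N) sum x_i x_i^t."""
    K = X.T @ X / len(X)
    lam, E = np.linalg.eigh(K)
    return lam, E, (E * np.log(lam)) @ E.T

lam_r, Er, logKr = eig_o2p(Xr)
lam_s, Es, logKs = eig_o2p(Xs)

# left-hand side of Eq. (10): inner product of the vectorized log-matrices
lhs = logKr.ravel() @ logKs.ravel()
# right-hand side: double sum over squared template similarities <e_k^r, e_q^s>^2
G = (Er.T @ Es) ** 2
rhs = np.log(lam_r) @ G @ np.log(lam_s)
print(np.isclose(lhs, rhs))  # True

# Eq. (11): each eigenvalue is the average squared similarity <x_i, e_k>^2
print(np.allclose(lam_r, np.mean((Xr @ Er) ** 2, axis=0)))  # True
```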
It is reasonable that M templates are not sufficient when the number of different vectors in {x_i}_N increases, especially when they are very different from each other. We now introduce an algorithm to increase the robustness of O2P to the variability of the input vectors. We quantize the input feature vectors, {x_i}, before computing O2P. Quantization may discard details, and hence, reduce the variability among vectors. In the experiments section it is reported that this prevents the degradation of performance in object recognition when the number of input feature vectors increases. The quantization algorithm that we use is sparse quantization (SQ) [15, 19], because SQ does not change the dimensionality of the feature vector. Also, SQ is fast to compute, and does not increase the computational cost of O2P.

Algorithm 1: Sparse Quantization in O2P
  Input: {x_i}_N, k
  Output: y
  foreach i = 1, …, N do
    x̂_i ← set the k highest values of x_i to their entries in x_i, and the rest to 0
  end
  K = (1/N) Σ_i x̂_i x̂_i^t
  y = vec(log(K))

Sparse Quantization for O2P. For the quantization of {x_i} we use SQ, which is a quantization to the set of k-sparse vectors. Let R^q_k be the set of k-sparse vectors, i.e. {s ∈ R^q : ‖s‖₀ ≤ k}. Also, we define B^q_k = {s ∈ {0, 1}^q : ‖s‖₀ = k}, which is the set of binary vectors with k elements set to one and (q − k) set to zero. The cardinality |B^q_k| is equal to (q choose k). The quantization of a vector v ∈ R^q into a codebook {c_i} is a mapping of v to the closest element in {c_i}, i.e. v̂* = arg min_{v̂ ∈ {c_i}} ‖v̂ − v‖₂, where v̂* is the quantized vector v. In the case of SQ, the codebook {c_i} contains the set of k-sparse vectors, which may be either of the previously introduced types, R^q_k or B^q_k. An important advantage of SQ over a general quantization is that it can be computed much more efficiently.
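Algorithm 1 is simple enough to state directly in code. A minimal sketch (the function names are ours) of SQ into R^M_k followed by O2P:

```python
import numpy as np

def sparse_quantize(x, k):
    """SQ into R^q_k: keep the k highest entries of x, zero the rest."""
    xq = np.zeros_like(x)
    idx = np.argsort(x)[-k:]            # indices of the k highest values
    xq[idx] = x[idx]
    return xq

def o2p_with_sq(X, k):
    """Algorithm 1: sparse-quantize each input vector, then compute O2P."""
    Xq = np.stack([sparse_quantize(x, k) for x in X])
    K = Xq.T @ Xq / len(Xq)
    lam, E = np.linalg.eigh(K + 1e-10 * np.eye(X.shape[1]))  # jitter is ours
    return ((E * np.log(lam)) @ E.T).ravel()

rng = np.random.default_rng(2)
X = rng.random((30, 16))               # 30 non-negative, SIFT-like toy descriptors
y = o2p_with_sq(X, k=4)
print(y.shape)  # (256,)
```

The cost of the quantization step is a single partial sort per vector, which is negligible next to the eigendecomposition.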
The naive way to compute a general quantization is to evaluate the nearest neighbor of v in {c_i}, which may be costly for large codebooks and high-dimensional v. In contrast, SQ can be computed by selecting the k highest values of v: for SQ into R^q_k, v̂_i = v_i if i is one of the k highest entries of vector v, and 0 otherwise. For SQ into B^q_k, the dimensions indexed by the k highest entries are set to 1 instead of v_i, and to 0 otherwise. (We refer the reader to [15, 19] for a more detailed explanation of SQ.) In Algorithm 1 we depict the implementation of SQ in O2P, which highlights its simplicity. The computational cost of SQ is negligible compared to the cost of computing O2P. We use the set of k-sparse vectors in R^M_k for SQ, which worked best in practice, as shown in the following.

5 Experiments

In this section, we analyze O2P in image classification from densely sampled SIFT descriptors. This setup is common in image classification, and it allows direct comparison to previous works on O2P. We report results on the Caltech101 [11] and VOC07 [12] datasets, using the standard evaluation benchmarks, i.e. the mean accuracy and the mean average precision across all classes, respectively.

5.1 Implementation Details

We use the standard pipeline for image classification. We never use flipped or blurred images to extend the training set.

Pipeline. For Caltech101, the image is re-sized to a maximum height and width of 300 pixels, which is the standard resizing protocol for this dataset. For VOC07 the size of the images remains the same as the original. We extract SIFT [8] from patches on a regular grid, at different scales. In Caltech101, we extract them every 8 pixels and at the scales of 16, 32 and 48 pixels diameter. In VOC07, SIFT is sampled every 4 pixels and at the scales of 12, 24 and 36 pixels diameter. O2P is computed using the SIFT descriptors as input, and using spatial pyramids.
In Caltech101, we generate the pooling regions by dividing the image into 4 × 4, 2 × 2 and 1 × 1 regions, and in VOC07 into 3 × 1, 2 × 2 and 1 × 1 regions. To generate the final descriptor for the whole image, we concatenate the descriptors of each pooled region. We apply the power normalization sign(x)|x|^{3/4} to the final feature dimensions, which was shown to work well in practice [5]. Finally, we use a linear one-versus-rest SVM classifier for each class with the parameter C of the SVM set to 1000. We use the LIBLINEAR library for the SVM [20].

Other Feature Codings. As a sanity check of our results, we replace O2P with the Bag-of-Words [13] baseline, without changing any of the parameters. In Caltech101, we replace the average pooling of Bag-of-Words by max-pooling (without normalization), as it performs better. The codebook is learned by randomly picking a set of patches as codebook entries, which was shown to work well for the encodings we are evaluating [14]. We use a codebook of 8192 entries, since with more entries the performance does not increase significantly, but the computational cost does.

5.2 Results on Caltech101

We use 3 random splits of 30 images per class for training and the rest for testing. In Fig. 1a, results are shown for different spatial pyramid configurations, as well as different levels of quantization. Note that SQ with k = 128 does not introduce any quantization, as SIFT features are 128-dimensional vectors. Using SQ increases the performance by more than 5% compared to not using SQ (k = 128) when using only the first level of the pyramid. For the other levels of the pyramid, there is less improvement with SQ. This is in accordance with the observation that smaller regions contain fewer SIFT vectors, so the variability is smaller, and the limited number of templates captures the meaningful information better than in bigger regions.
We can also see that for small k of SQ, the performance degrades due to the introduction of too much quantization. We also run experiments with Bag-of-Words with max-pooling (74.8%) and O2P without SQ (76.52%), and both are surpassed by O2P with SQ (78.63%). In [5], O2P accuracy is reported to be 79.2% with the SIFT descriptor (we do not compare to their version of enriched SIFT, since all our experiments use normal SIFT). We inspected the code of [5], and found that the difference in accuracy mainly comes from using a more drastic resizing of the image, to a maximum of 100 pixels of width and height (usually in the literature it is 300 pixels). Note that resizing is another way of discarding information, and hence O2P may benefit from it. We confirm this by resizing the image back to 300 pixels in [5]'s code, and the accuracy is 77.1%, similar to the one that we report without SQ in our code. The accuracy is not exactly the same due to differences in the SIFT parameters in [5]. Also, we tested SQ in [5]'s code with the resizing to a maximum of 100 pixels, and the accuracy increased to 79.45%, which is higher than reported in [5], and close to state-of-the-art results using SIFT descriptors (80.3%) [21].

5.3 Results on VOC07

In Fig. 1b, we run the same experiment as in Caltech101. Note that the impact of SQ is even more evident than in Caltech101. In Table 2 we report the per-class accuracy, in addition to the mean average precision reported in Fig. 1b. We follow the evaluation procedure described in [12]. With the full pyramid, when we use SQ the accuracy increases from 18.81% to 50.97%. In contrast to Caltech101, O2P with SQ performance is similar to our implementation of Bag-of-Words (51.14%). Thus, under adverse conditions for O2P, i.e. images with high variability such as in VOC07 and with a high number of input vectors, we can use SQ and obtain huge improvements in O2P's accuracy.
The best reported results [22] in VOC07 are around 10% better than O2P with SQ, yet we obtain more than 30% improvement over the baseline.

6 Conclusions

We found that O2P can be posed as a coding-pooling scheme based on evaluating similarities to templates. The templates of O2P self-adapt to the input, while those of the rest of the analyzed methods do not. In practice, our formulation was used to improve the performance of O2P in image classification. We are currently analyzing self-adaptive templates in deep hierarchical networks.

[Figure 1 (plots omitted): curves for the configurations 1 pyr., 1+2 pyr., 1+2+3 pyr., 1+2+3 pyr. w/o SQ, and SQ selected on the validation set, over SQ values 5-128; highlighted values are 76.52%, 78.63%, 75.55%, 65.14% for Caltech 101 and 18.81%, 50.97%, 49.09%, 41.20% for VOC07.] Figure 1: Results for different numbers of non-zero entries of SQ. Note that SQ at k = 128 does not introduce any quantization, since SIFT features are 128-dimensional vectors. (a) Caltech 101 (using 30 images per class for training), (b) VOC07.

Table 2: PASCAL VOC 2007 classification results. The last column gives the per-class average.

Class order: Aeroplane, Bicycle, Bird, Boat, Bottle, Bus, Car, Cat, Chair, Cow, Dining Table, Dog, Horse, Motorbike, Person, Potted Plant, Sheep, Sofa, Train, TV/Monitor | Average
3 Pyr. O2P + SQ:   72 53 45 63 23 51 69 52 50 35 44 41 74 56 78 19 35 50 67 45 | 50.97
3 Pyr. O2P w/o SQ: 34  9 12 18  6 19 40 14 26 14  9 21 28 17 55  7  7 10 16 12 | 18.81
2 Pyr. O2P + SQ:   71 50 41 62 20 50 68 47 47 33 41 37 69 56 74 18 36 51 66 44 | 49.09
1 Pyr. O2P + SQ:   66 41 32 58 15 37 58 38 40 27 28 30 61 43 66 20 33 37 56 36 | 41.20
1 Pyr. O2P w/o SQ: 21  7 11  9  6  8 29 10 22  4  7 12 12  8 49  6  5  7  9  9 | 12.53
We report results for O2P, with and without SQ, with the first plus second plus third levels of pyramids (3 Pyr.); O2P with SQ with the first plus second levels of pyramids (2 Pyr.); and O2P with and without SQ with only the first level of pyramids (1 Pyr.).

Acknowledgments: We thank the ERC for support from AdG VarCity.

References
[1] D. Le Bihan, J.-F. Mangin, C. Poupon, C. A. Clark, S. Pappata, N. Molko, and H. Chabriat, "Diffusion tensor imaging: concepts and applications," Journal of Magnetic Resonance Imaging, 2001.
[2] J. Weickert and H. Hagen, Visualization and Processing of Tensor Fields. Springer, 2006.
[3] O. Tuzel, F. Porikli, and P. Meer, "Region covariance: A fast descriptor for detection and classification," in ECCV, 2006.
[4] P. Li and Q. Wang, "Local log-Euclidean covariance matrix (L2ECM) for image representation and its applications," in ECCV, 2012.
[5] J. Carreira, R. Caseiro, J. Batista, and C. Sminchisescu, "Semantic segmentation with second-order pooling," in ECCV, 2012.
[6] K. Fukushima, "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position," Biological Cybernetics, 1980.
[7] M. Riesenhuber and T. Poggio, "Hierarchical models of object recognition in cortex," Nature Neuroscience, 1999.
[8] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," IJCV, 2004.
[9] J. Bergstra, D. Yamins, and D. Cox, "Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures," in ICML, 2013.
[10] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in CVPR, 2014.
[11] L. Fei-Fei, R. Fergus, and P. Perona, "One-shot learning of object categories," TPAMI, 2006.
[12] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman, "The PASCAL visual object classes (VOC) challenge," IJCV, 2010.
[13] G. Csurka, C. R. Dance, L. Fan, J.
Willamowski, and C. Bray, "Visual categorization with bags of keypoints," in Workshop on Statistical Learning in Computer Vision, ECCV, 2004.
[14] A. Coates and A. Ng, "The importance of encoding versus training with sparse coding and vector quantization," in ICML, 2011.
[15] X. Boix, G. Roig, and L. Van Gool, "Nested sparse quantization for efficient feature coding," in ECCV, 2012.
[16] J. Sanchez, F. Perronnin, T. Mensink, and J. Verbeek, "Image classification with the Fisher vector: Theory and practice," IJCV, 2013.
[17] V. Arsigny, P. Fillard, X. Pennec, and N. Ayache, "Geometric means in a novel vector space structure on symmetric positive-definite matrices," Journal on Matrix Analysis and Applications, 2007.
[18] R. Bhatia, Positive Definite Matrices. Princeton University Press, 2009.
[19] X. Boix, M. Gygli, G. Roig, and L. Van Gool, "Sparse quantization for patch description," in CVPR, 2013.
[20] R. E. Fan, K. W. Chang, C. J. Hsieh, X. R. Wang, and C. J. Lin, "LIBLINEAR: A library for large linear classification," JMLR, 2008.
[21] O. Duchenne, A. Joulin, and J. Ponce, "A graph-matching kernel for object categorization," in ICCV, 2011.
[22] X. Zhou, K. Yu, T. Zhang, and T. S. Huang, "Image classification using super-vector coding of local image descriptors," in ECCV, 2010.
A Framework for Testing Identifiability of Bayesian Models of Perception

Luigi Acerbi (1,2), Wei Ji Ma (2), Sethu Vijayakumar (1)
(1) School of Informatics, University of Edinburgh, UK
(2) Center for Neural Science & Department of Psychology, New York University, USA
{luigi.acerbi,weijima}@nyu.edu  sethu.vijayakumar@ed.ac.uk

Abstract

Bayesian observer models are very effective in describing human performance in perceptual tasks, so much so that they are trusted to faithfully recover hidden mental representations of priors, likelihoods, or loss functions from the data. However, the intrinsic degeneracy of the Bayesian framework, as multiple combinations of elements can yield empirically indistinguishable results, prompts the question of model identifiability. We propose a novel framework for systematic testing of the identifiability of a significant class of Bayesian observer models, with practical applications for improving experimental design. We examine the theoretical identifiability of the inferred internal representations in two case studies. First, we show which experimental designs work better to remove the underlying degeneracy in a time interval estimation task. Second, we find that the reconstructed representations in a speed perception task under a slow-speed prior are fairly robust.

1 Motivation

Bayesian Decision Theory (BDT) has been traditionally used as a benchmark of ideal perceptual performance [1], and a large body of work has established that humans behave close to Bayesian observers in a variety of psychophysical tasks (see e.g. [2, 3, 4]). The efficacy of the Bayesian framework in explaining a huge set of diverse behavioral data suggests a stronger interpretation of BDT as a process model of perception, according to which the formal elements of the decision process (priors, likelihoods, loss functions) are independently represented in the brain and shared across tasks [5, 6].
Importantly, such mental representations, albeit not directly accessible to the experimenter, can be tentatively recovered from the behavioral data by 'inverting' a model of the decision process (e.g., priors [7, 8, 9, 10, 11, 12, 13, 14], likelihoods [9], and loss functions [12, 15]). The ability to faithfully reconstruct the observer's internal representations is key to the understanding of several outstanding issues, such as the complexity of statistical learning [11, 12, 16], the nature of mental categories [10, 13], and linking behavioral to neural representations of uncertainty [4, 6]. In spite of these successes, the validity of the conclusions reached by fitting Bayesian observer models to the data can be questioned [17, 18]. A major issue is that the inverse mapping from observed behavior to elements of the decision process is not unique [19]. To see this degeneracy, consider a simple perceptual task in which the observer is exposed to a stimulus s that induces a noisy sensory measurement x. The Bayesian observer reports the optimal estimate s* that minimizes his or her expected loss, where the loss function L(s, ŝ) encodes the loss (or cost) for choosing ŝ when the real stimulus is s. The optimal estimate for a given measurement x is computed as follows [20]:

s*(x) = arg min_ŝ ∫ q_meas(x|s) q_prior(s) L(s, ŝ) ds   (1)

where q_prior(s) is the observer's prior density over stimuli and q_meas(x|s) the observer's sensory likelihood (as a function of s). Crucially, for a given x, the solution of Eq. 1 is the same for any triplet of prior q_prior(s)·φ₁(s), likelihood q_meas(x|s)·φ₂(s), and loss function L(ŝ, s)·φ₃(s), where the φ_i(s) are three generic functions such that ∏_{i=1}^{3} φ_i(s) = c, for a constant c > 0. This analysis shows that the 'inverse problem' is ill-posed, as multiple combinations of priors, likelihoods and loss functions yield identical behavior [19], even before considering other confounding issues, such as latent states.
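The degeneracy of Eq. 1 is easy to confirm numerically: rescaling the prior by an arbitrary positive φ(s) while dividing the loss by the same φ(s) leaves the arg-min untouched. A discretized sketch (the specific densities, loss, and grid are illustrative choices of ours):

```python
import numpy as np

s = np.linspace(0.1, 3.0, 1200)              # stimulus grid
x = 1.2                                      # one fixed sensory measurement
q_meas = np.exp(-0.5 * (x - s) ** 2 / 0.1)   # likelihood of x as a function of s
q_prior = np.exp(-s)                         # an exponential prior
L = (s[None, :] - s[:, None]) ** 2           # quadratic loss; rows index s_hat, columns s

def s_star(prior, loss):
    # Eq. 1 on a grid: expected loss for each candidate s_hat, then the arg-min
    expected = (q_meas * prior * loss).sum(axis=1)
    return s[np.argmin(expected)]

phi = 1.0 + 0.5 * np.sin(3 * s)              # an arbitrary positive function phi(s)
same = s_star(q_prior, L) == s_star(q_prior * phi, L / phi[None, :])
print(same)  # True
```

Here φ₁ = φ, φ₂ = 1 and φ₃ = 1/φ, so the product of the three factors is constant and the expected-loss integrand is unchanged point by point.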
If uncontrolled, this redundancy of solutions may condemn the Bayesian models of perception to a severe form of model non-identifiability that prevents the reliable recovery of model components, and in particular the sought-after internal representations, from the data. In practice, the degeneracy of Eq. 1 can be prevented by enforcing constraints on the shape that the internal representations are allowed to take. Such constraints include: (a) theoretical considerations (e.g., that the likelihood emerges from a specific noise model [21]); (b) assumptions related to the experimental layout (e.g., that the observer will adopt the loss function imposed by the reward system of the task [3]); (c) additional measurements obtained either in independent experiments or in distinct conditions of the same experiment (e.g., through Bayesian transfer [5]). Crucially, both (b) and (c) are under partial control of the experimenter, as they depend on the experimental design (e.g., choice of reward system, number of conditions, separate control experiments). Although several approaches have been used or proposed to suppress the degeneracy of Bayesian models of perception [12, 19], there has been no systematic analysis – neither empirical nor theoretical – of their effectiveness, nor a framework to perform such study a priori, before running an experiment. This paper aims to fill this gap for a large class of psychophysical tasks. Similar issues of model non-identifiability are not new to psychology [22], and generic techniques of analysis have been proposed (e.g., [23]). Here we present an efficient method that exploits the common structure shared by many Bayesian models of sensory estimation. First, we provide a general framework that allows a modeller to perform a systematic, a priori investigation of identifiability, that is the ability to reliably recover the parameters of interest, for a chosen Bayesian observer model. 
Second, we show how, by comparing identifiability within distinct ideal experimental setups, our framework can be used to improve experimental design. In Section 2 we introduce a novel class of observer models that is both flexible and efficient, key requirements for the subsequent analysis. In Section 3 we describe a method to efficiently explore identifiability of a given observer model within our framework. In Section 4 we show an application of our technique to two well-known scenarios in time perception [24] and speed perception [9]. We conclude with a few remarks in Section 5. 2 Bayesian observer model Here we introduce a continuous class of Bayesian observer models parametrized by vector θ. Each value of θ corresponds to a specific observer that can be used to model the psychophysical task of interest. The current model (class) extends previous work [12, 14] by encompassing any sensorimotor estimation task in which a one-dimensional stimulus magnitude variable s, such as duration, distance, speed, etc. is directly estimated by the observer. This is a fundamental experimental condition representative of several studies in the field (e.g., [7, 9, 12, 24, 14]). With minor modifications, the model can also cover angular variables such as orientation (for small errors) [8, 11] and multidimensional variables when symmetries make the actual inference space one-dimensional [25]. The main novel feature of the presented model is that it covers a large representational basis with a single parametrization, while still allowing fast computation of the observer’s behavior, both necessary requirements to permit an exploration of the complex model space, as described in Section 3. 
The generic observer model is constructed in four steps (Figure 1 a & b): 1) the sensation stage describes how the physical stimulus s determines the internal measurement x; 2) the perception stage describes how the internal measurement x is combined with the prior to yield a posterior distribution; 3) the decision-making stage describes how the posterior distribution and loss function guide the choice of an 'optimal' estimate s* (possibly corrupted by lapses); and finally 4) the response stage describes how the optimal estimate leads to the observed response r.

2.1 Sensation stage

For computational convenience, we assume that the stimulus s ∈ R⁺ (the task space) comes from a discrete experimental distribution of stimuli s_i with frequencies P_i, with P_i > 0 and Σ_i P_i = 1 for 1 ≤ i ≤ N_exp. Discrete distributions of stimuli are common in psychophysics, and continuous distributions can be 'binned' and approximated up to the desired precision by increasing N_exp.

[Figure 1: Observer model. Graphical model of a sensorimotor estimation task, as seen from the outside (a), and from the subjective point of view of the observer (b). a: Objective generative model of the task. Stimulus s induces a noisy sensory measurement x in the observer, who decides for estimate s* (see b). The recorded response r is further perturbed by reporting noise. Shaded nodes denote experimentally accessible variables. b: Observer's internal model of the task. The observer performs inference in an internal measurement space in which the unknown stimulus is denoted by t (with t = f(s)). The observer either chooses the subjectively optimal value of t, given internal measurement x, by minimizing the expected loss, or simply lapses with probability λ. The observer's chosen estimate t* is converted to task space through the inverse mapping s* = f⁻¹(t*). The whole process in this panel is encoded in (a) by the estimate distribution p_est(s*|x).]

Due to noise in the sensory systems, stimulus s induces an internal measurement x ∈ R according to the measurement distribution p_meas(x|s) [20]. In general, the magnitude of sensory noise may be stimulus-dependent in task space, in which case the shape of the likelihood would change from point to point, which is unwieldy for subsequent computations. We want instead to find a transformed space in which the scale of the noise is stimulus-independent and the likelihood translationally invariant [9] (see Supplementary Material). We assume that such change of variables is performed by a function f(s) : s → t that monotonically maps stimulus s from task space into t = f(s), which lives with x in an internal measurement space. We assume for f(s) the following parametric form:

f(s) = A ln[1 + (s/s₀)^d] + B,  with inverse  f⁻¹(t) = s₀ (e^{(t−B)/A} − 1)^{1/d}   (2)

where A and B are chosen, without loss of generality, such that the discrete distribution of stimuli mapped in internal space, {f(s_i)} for 1 ≤ i ≤ N_exp, has range [−1, 1]. The parametric form of the sensory map in Eq. 2 can approximate both the Weber-Fechner law and Stevens' law, for different values of the base noise magnitude s₀ and power exponent d (see Supplementary Material). We determine the shape of p_meas(x|s) with a maximum-entropy approach by fixing the first four moments of the distribution, under the rather general assumptions that the sensory measurement is unimodal and centered on the stimulus in internal measurement space.
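The sensory map of Eq. 2 and its inverse are straightforward to implement. A sketch (the stimulus set and the constants s₀, d are illustrative) that also fixes A and B so that the mapped stimuli span [−1, 1]:

```python
import numpy as np

def f(s, A, B, s0, d):
    """Sensory map f(s) = A ln[1 + (s/s0)^d] + B  (Eq. 2)."""
    return A * np.log1p((s / s0) ** d) + B

def f_inv(t, A, B, s0, d):
    """Inverse map f^{-1}(t) = s0 (e^{(t-B)/A} - 1)^{1/d}."""
    return s0 * np.expm1((t - B) / A) ** (1.0 / d)

s_i = np.linspace(0.4, 2.0, 11)          # illustrative stimulus set
s0, d = 0.1, 1.0
raw = np.log1p((s_i / s0) ** d)
A = 2.0 / (raw.max() - raw.min())        # A, B chosen so f(s_i) spans [-1, 1]
B = -1.0 - A * raw.min()

t = f(s_i, A, B, s0, d)
print(round(t.min()), round(t.max()))    # -1 1 (up to floating-point error)
print(np.allclose(f_inv(t, A, B, s0, d), s_i))  # True
```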
For computational convenience, we express p_meas(x|s) as a mixture of (two) Gaussians in internal measurement space:

p_meas(x|s) = π N(x | f(s) + µ₁, σ₁²) + (1 − π) N(x | f(s) + µ₂, σ₂²)   (3)

where N(x | µ, σ²) is a normal distribution with mean µ and variance σ² (in this paper we consider a two-component mixture, but the derivations easily generalize to more components). The parameters in Eq. 3 are partially determined by specifying the first four central moments: E[x] = f(s), Var[x] = σ², Skew[x] = γ, Kurt[x] = κ; where σ, γ, κ are free parameters. The remaining degrees of freedom (one, for two Gaussians) are fixed by picking a distribution that satisfies unimodality and locally maximizes the differential entropy (see Supplementary Material). The sensation model represented by Eqs. 2 and 3 allows to express a large class of sensory models in the psychophysics literature, including for instance stimulus-dependent noise [9, 12, 24] and 'robust' mixture models [21, 26].

2.2 Perceptual stage

Without loss of generality, we represent the observer's prior distribution q_prior(t) as a mixture of M dense, regularly spaced Gaussian distributions in internal measurement space:

q_prior(t) = Σ_{m=1}^{M} w_m N(t | µ_min + (m − 1)a, a²),  with  a ≡ (µ_max − µ_min)/(M − 1)   (4)

where the w_m are the mixing weights, a the lattice spacing and [µ_min, µ_max] the range in internal space over which the prior is defined (chosen 50% wider than the true stimulus range). Eq. 4 allows the modeller to approximate any observer's prior, where M regulates the fine-grainedness of the representation and is determined by computational constraints (for all our analyses we fix M = 15). For simplicity, we assume that the observer's internal representation of the likelihood, q_meas(x|t), is expressed in the same measurement space and again takes the form of a unimodal mixture of two Gaussians, Eq. 3, although with possibly different variance, skewness and kurtosis (respectively, σ̃², γ̃ and κ̃) than the true likelihood.
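The lattice prior of Eq. 4 can be evaluated directly. A sketch with uniform mixing weights (an arbitrary choice for illustration; function names are ours):

```python
import numpy as np

def normpdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def q_prior(t, w, mu_min, mu_max):
    """Eq. 4: mixture of M regularly spaced Gaussians with lattice spacing a."""
    M = len(w)
    a = (mu_max - mu_min) / (M - 1)
    centers = mu_min + np.arange(M) * a
    return sum(wm * normpdf(t, mu, a ** 2) for wm, mu in zip(w, centers))

M = 15
w = np.ones(M) / M                       # uniform weights, for illustration
t = np.linspace(-3, 3, 6001)
p = q_prior(t, w, mu_min=-1.5, mu_max=1.5)
print(round(p.sum() * (t[1] - t[0]), 3))  # ~1.0: the mixture is normalized
```

Because each component shares the same standard deviation a, neighbouring Gaussians overlap substantially and the mixture can represent smooth priors of essentially arbitrary shape over the lattice range.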
We write the observer's posterior distribution as q_post(t|x) = (1/Z) q_prior(t) q_meas(x|t), with Z the normalization constant.

2.3 Decision-making stage

According to Bayesian Decision Theory (BDT), the observer's 'optimal' estimate corresponds to the value of the stimulus that minimizes the expected loss, with respect to loss function L(t, t̂), where t is the true value of the stimulus and t̂ its estimate. In general the loss could depend on t and t̂ in different ways, but for now we assume a functional dependence only on the stimulus difference in internal measurement space, t̂ − t. The (subjectively) optimal estimate is:

t*(x) = arg min_t̂ ∫ q_post(t|x) L(t̂ − t) dt   (5)

where the integral on the r.h.s. represents the expected loss. We make the further assumption that the loss function is well-behaved, that is smooth, with a unique minimum at zero (i.e., the loss is minimal when the estimate matches the true stimulus), and with no other local minima. As before, we adopt a maximum-entropy approach and restrict ourselves to the class of loss functions that can be described as mixtures of two (inverted) Gaussians:

L(t̂ − t) = −π_ℓ N(t̂ − t | µ₁^ℓ, (σ₁^ℓ)²) − (1 − π_ℓ) N(t̂ − t | µ₂^ℓ, (σ₂^ℓ)²).   (6)

Although the loss function is not a distribution, we find it convenient to parametrize it in terms of statistics of a corresponding unimodal distribution obtained by flipping Eq. 6 upside down: Mode[t′] = 0, Var[t′] = σ_ℓ², Skew[t′] = γ_ℓ, Kurt[t′] = κ_ℓ; with t′ ≡ t̂ − t. Note that we fix the location of the mode of the mixture of Gaussians so that the global minimum of the loss is at zero. As before, the remaining free parameter is fixed by taking a local maximum-entropy solution. A single inverted Gaussian already allows to express a large variety of losses, from a delta function (MAP strategy) for σ_ℓ → 0 to a quadratic loss for σ_ℓ → ∞ (in practice, for σ_ℓ ≳ 1), and it has been shown to capture human sensorimotor behavior quite well [15]. Eq. 6 further extends the range of describable losses to asymmetric and more or less peaked functions. Crucially, Eqs. 3, 4, 5 and 6 combined yield an analytical expression for the expected loss that is a mixture of Gaussians (see Supplementary Material), which allows for a fast numerical solution [14, 27]. We allow for the possibility that the observer may occasionally deviate from BDT due to lapses, with probability λ ≥ 0. In the case of a lapse, the observer's estimate t* is drawn randomly from the prior [11, 14]. The combined stochastic estimator with lapse in task space has distribution:

p_est(s*|x) = (1 − λ) · δ(s* − f⁻¹(t*(x))) + λ · q_prior(s*) |f′(s*)|   (7)

where f′(s*) is the derivative of the mapping in Eq. 2 (see Supplementary Material).

2.4 Response stage

We assume that the observer's response r is equal to the observer's estimate corrupted by independent normal noise in task space, due to motor error and other residual sources of variability:

p_report(r|s*) = N(r | s*, σ²_report(s*))   (8)

where we choose a simple parametric form for the variance: σ²_report(s) = ρ₀² + ρ₁²s², that is, the sum of two independent noise terms (constant noise plus noise that grows with the magnitude of the stimulus). In our current analysis we are interested in observer models of perception, so we do not explicitly model details of the motor aspect of the task and we do not include the consequences of response error in the decision-making part of the model (Eq. 5). Finally, the main observable that the experimenter can measure is the response probability density, p_resp(r|s; θ), of a response r for a given stimulus s and observer's parameter vector θ [12]:

p_resp(r|s; θ) = ∫∫ N(r | s*, σ²_report(s*)) p_est(s*|x) p_meas(x|s) ds* dx   (9)

obtained by marginalizing over unobserved variables (see Figure 1 a), and which we can compute through Eqs. 3-8. An observer model is fully characterized by the parameter vector θ:

θ = (σ, γ, κ, s₀, d, σ̃, γ̃, κ̃, σ_ℓ, γ_ℓ, κ_ℓ, {w_m}_{m=1}^{M}, ρ₀, ρ₁, λ).
An experimental design is specified by a reference observer model $\theta^*$, an experimental distribution of stimuli (a discrete set of $N_\text{exp}$ stimuli $s_i$, each with relative frequency $P_i$), and possibly a subset of parameters that are assumed to be equal to some a priori or experimentally measured values during the inference. For experiments with multiple conditions, an observer model typically shares several parameters across conditions. The reference observer $\theta^*$ represents a 'typical' observer for the idealized task under examination; its parameters are determined from pilot experiments, the literature, or educated guesses. We are ready now to tackle the problem of identifiability of the parameters of $\theta^*$ within our framework for a given experimental design.

3 Mapping a priori identifiability

Two observer models $\theta$ and $\theta^*$ are a priori practically non-identifiable if they produce similar response probability densities $p_\text{resp}(r|s_i;\theta)$ and $p_\text{resp}(r|s_i;\theta^*)$ for all stimuli $s_i$ in the experiment. Specifically, we assume that data are generated by the reference observer $\theta^*$ and we ask what is the chance that a randomly generated dataset $D$ of a fixed size $N_\text{tr}$ will instead provide support for observer $\theta$. For one specific dataset $D$, a natural way to quantify support would be the posterior probability of a model given the data, $\Pr(\theta|D)$. However, randomly generating a large number of datasets so as to approximate the expected value of $\Pr(\theta|D)$ over all datasets, in the spirit of previous work on model identifiability [23], becomes intractable for complex models such as ours. Instead, we define the support for observer model $\theta$, given dataset $D$, as its log likelihood, $\log \Pr(D|\theta)$. The log (marginal) likelihood is a widespread measure of evidence in model comparison, from sampling algorithms to metrics such as AIC, BIC and DIC [28]. Since we know the generative model of the data, $\Pr(D|\theta^*)$, we can compute the expected support for model $\theta$ as:

$$\langle \log \Pr(D|\theta) \rangle = \int_{|D| = N_\text{tr}} \log \Pr(D|\theta)\, \Pr(D|\theta^*)\, dD. \qquad (11)$$
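When the response distributions are discrete, this expectation can be computed exactly and checked against its decomposition into a negative KL divergence plus an entropy constant (Eq. 12 below). The distributions in this toy check are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discrete task: 3 stimuli, 5 possible responses.
P = np.array([0.5, 0.3, 0.2])                  # stimulus frequencies P_i
p_true = rng.dirichlet(np.ones(5), size=3)     # p_resp(r|s_i; theta*), one row per stimulus
p_model = rng.dirichlet(np.ones(5), size=3)    # p_resp(r|s_i; theta)
Ntr = 500

# Expected support for theta: Ntr * sum_i P_i * E_{r ~ p*}[log p(r|s_i)]
expected_loglik = Ntr * np.sum(P[:, None] * p_true * np.log(p_model))

# Eq. 12 form: -Ntr * sum_i P_i * (KL(p*_i || p_i) + H(p*_i))
kl = np.sum(p_true * np.log(p_true / p_model), axis=1)
entropy = -np.sum(p_true * np.log(p_true), axis=1)
rhs = -Ntr * np.sum(P * (kl + entropy))

print(expected_loglik, rhs)  # identical up to floating point
```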
The formal integration over all possible datasets with a fixed number of trials $N_\text{tr}$ yields:

$$\langle \log \Pr(D|\theta) \rangle = -N_\text{tr} \sum_{i=1}^{N_\text{exp}} P_i \cdot D_\text{KL}\!\left(p_\text{resp}(r|s_i;\theta^*)\,\middle\|\,p_\text{resp}(r|s_i;\theta)\right) + \text{const} \qquad (12)$$

where $D_\text{KL}(\cdot\|\cdot)$ is the Kullback-Leibler (KL) divergence between two distributions, and the constant is an entropy term that does not depend on $\theta$ and therefore does not affect our subsequent analysis (see Supplementary Material for the derivation). Crucially, $D_\text{KL}$ is non-negative, and zero only when the two distributions are identical. The asymmetry of the KL-divergence captures the different status of $\theta^*$ and $\theta$ (that is, we measure differences only on datasets generated by $\theta^*$). Eq. 12 quantifies the average support for model $\theta$ given true model $\theta^*$, which we use as a proxy to assess model identifiability. As an empirical tool to explore the identifiability landscape, we define the approximate expected posterior density as:

$$\mathcal{E}(\theta|\theta^*) \propto e^{\langle \log \Pr(D|\theta)\rangle} \qquad (13)$$

and we sample from Eq. 13 via MCMC. Clearly, $\mathcal{E}(\theta|\theta^*)$ is maximal for $\theta = \theta^*$ and generally high for regions of the parameter space empirically close to the predictions of $\theta^*$. Moreover, the peakedness of $\mathcal{E}(\theta|\theta^*)$ is modulated by the number of trials $N_\text{tr}$ (the more the trials, the more information to discriminate between models).

4 Results

We apply our framework to two case studies: the inference of priors in a time interval estimation task (see [24]) and the reconstruction of prior and noise characteristics in speed perception [9].

Figure 2: Internal representations in interval timing (Short condition). Accuracy of the reconstructed priors in the Short range; each row corresponds to a different experimental design.
a: The first column shows the reference prior (thick red line) and the recovered mean prior ± 1 SD (black line and shaded area). The other columns display the distributions of the recovered central moments of the prior. Each panel shows the median (black line), the interquartile range (dark-shaded area) and the 95% interval (light-shaded area). The green dashed line marks the true value. b: Box plots of the symmetric KL-divergence between the reconstructed priors and the prior of the reference observer. At top, the primacy probability $P^*$ of each setup having less reconstruction error than all the others (computed by bootstrap). c: Joint posterior density of sensory noise $\sigma$ and motor noise $\rho_1$ in setup BSL (gray contour plot; colored plots are marginal distributions). The parameters are anti-correlated, and discordant with the true value (star and dashed lines). d: Marginal posterior density for loss width parameter $\sigma_\ell$, suitably rescaled.

4.1 Temporal context and interval timing

We consider a time interval estimation and reproduction task very similar to [24]. In each trial, the stimulus $s$ is a time interval (e.g., the interval between two flashes), drawn from a fixed experimental distribution, and the response $r$ is the reproduced duration (e.g., the interval between the second flash and a mouse click). Subjects perform in one or two conditions, corresponding to two different discrete uniform distributions of durations, either on a Short (494–847 ms) or a Long (847–1200 ms) range. Subjects are trained separately on each condition until they (roughly) learn the underlying distribution, at which point their performance is measured in a test session; here we simulate only the test sessions. We assume that the experimenter's goal is to faithfully recover the observer's priors, and we analyze the effect of different experimental designs on the reconstruction error. To cast the problem within our framework, we first need to define the reference observer $\theta^*$.
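Constant Gaussian noise in a logarithmic internal measurement space (the scalar property assumed for this task) makes the standard deviation of the measurements grow in proportion to the interval, keeping the coefficient of variation fixed. A quick simulation with $\sigma = 0.10$ as in the text (the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.10  # constant measurement noise in internal (log) space

def measure(s_ms, n):
    t = rng.normal(np.log(s_ms), sigma, n)  # internal measurement: t = log(s) + noise
    return np.exp(t)                        # mapped back to task space (ms)

cvs = []
for s in (494.0, 847.0):                    # endpoints of the Short range
    m = measure(s, 400_000)
    cvs.append(m.std() / m.mean())          # coefficient of variation
print(cvs)  # both close to sigma: Weber-like scaling of timing noise
```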
We make the following assumptions: (a) the observer's priors (or prior, in only one condition) are smoothed versions of the experimental uniform distributions; (b) the sensory noise is affected by the scalar property of interval timing, so that the sensory mapping is logarithmic ($s_0 \approx 0$, $d = 1$); (c) we take average sensorimotor noise parameters from [24]: $\sigma = 0.10$, $\gamma = 0$, $\kappa = 0$, and $\rho_0 \approx 0$, $\rho_1 = 0.07$; (d) for simplicity, the internal likelihood coincides with the measurement distribution; (e) the loss function in internal measurement space is almost-quadratic, with $\sigma_\ell = 0.5$, $\gamma_\ell = 0$, $\kappa_\ell = 0$; (f) we assume a small lapse probability $\lambda = 0.03$; (g) in case the observer performs in two conditions, all observer's parameters are shared across conditions (except for the priors). For the inferred observer $\theta$ we allow all model parameters to change freely, keeping only assumptions (d) and (g). We compare the following variations of the experimental setup:

1. BSL: The baseline version of the experiment; the observer performs in both the Short and Long conditions ($N_\text{tr} = 500$ each);
2. SRT or LNG: The observer performs more trials ($N_\text{tr} = 1000$), but only either in the Short (SRT) or in the Long (LNG) condition;
3. MAP: As BSL, but we assume a difference in the performance feedback of the task such that the reference observer adopts a narrower loss function, closer to MAP ($\sigma_\ell = 0.1$);
4. MTR: As BSL, but the observer's motor noise parameters $\rho_0$, $\rho_1$ are assumed to be known (e.g., measured in a separate experiment), and are therefore fixed during the inference.

We sample from the approximate posterior density (Eq. 13), obtaining a set of sampled priors for each distinct experimental setup (see Supplementary Material for details). Figure 2 a shows the reconstructed priors and their central moments for the Short condition (results are analogous for the Long condition; see Supplementary Material).
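The role of the loss width $\sigma_\ell$, which separates the BSL and MAP setups, can be illustrated directly. Because the expected loss in Eq. 5 is a (negated) mixture of Gaussians — convolving $\mathcal N(t|m, v)$ with an inverted Gaussian of width $\sigma_\ell$ gives $-\mathcal N(\hat t|m, v + \sigma_\ell^2)$ — the BDT estimate for a bimodal posterior moves from the posterior mode toward the posterior mean as $\sigma_\ell$ grows. The posterior and all values below are illustrative only:

```python
import numpy as np

def norm_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def optimal_estimate(w, mu, var, sigma_l):
    """Minimize the expected inverted-Gaussian loss under a mixture posterior.

    Closed form: E[L](t_hat) = -sum_i w_i * N(t_hat | mu_i, var_i + sigma_l^2),
    so we just scan a fine grid for the minimizer.
    """
    grid = np.linspace(-5.0, 10.0, 15001)
    expected_loss = np.zeros_like(grid)
    for wi, mi, vi in zip(w, mu, var):
        expected_loss -= wi * norm_pdf(grid, mi, vi + sigma_l ** 2)
    return grid[np.argmin(expected_loss)]

# Bimodal posterior: a narrow mode at 0 and a broad mode at 2 (posterior mean = 1).
w, mu, var = [0.5, 0.5], [0.0, 2.0], [0.04, 1.0]
t_narrow = optimal_estimate(w, mu, var, sigma_l=0.05)  # delta-like loss -> near the mode
t_broad = optimal_estimate(w, mu, var, sigma_l=10.0)   # quadratic-like loss -> near the mean
print(t_narrow, t_broad)
```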
We summarize the reconstruction error of the recovered priors in terms of the symmetric KL-divergence from the reference prior (Figure 2 b). Our analysis suggests that the baseline setup BSL does a relatively poor job at inferring the observers' priors. The mean and skewness of the inferred prior are generally acceptable, but, for example, the SD tends to be considerably lower than the true value. Examining the posterior density across various dimensions, we find that this mismatch emerges from a partial non-identifiability of the sensory noise, $\sigma$, and the motor noise, $\rho_1$ (Figure 2 c).¹ Limiting the task to a single condition with double the number of trials (SRT) only slightly improves the quality of the inference. Surprisingly, we find that a design that encourages the observer to adopt a loss function closer to MAP considerably worsens the quality of the reconstruction in our model. In fact, the loss width parameter $\sigma_\ell$ is only weakly identifiable (Figure 2 d), with severe consequences for the recovery of the priors in the MAP case. Finally, we find that if we can independently measure the motor parameters of the observer (MTR), the degeneracy is mostly removed and the priors can be recovered quite reliably. Our analysis suggests that the reconstruction of internal representations in interval timing requires strong experimental constraints and validations [12]. This worked example also shows how our framework can be used to rank experimental designs by the quality of the inferred features of interest (here, the recovered priors), and to identify parameters that may critically affect the inference. Some findings align with our intuitions (e.g., measuring the motor parameters), but others may be non-obvious, such as the bad impact that a narrow loss function may have on the inferred priors within our model.
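The symmetric KL-divergence used above as a reconstruction-error score is straightforward to compute on discretized densities. In this hypothetical sketch, a Gaussian 'reference prior' is compared with an SD-underestimating reconstruction (the failure mode seen in BSL) and with a slightly mean-shifted one (none of these curves are actual reconstructions):

```python
import numpy as np

def symmetric_kl(p, q, dx):
    """Symmetrized KL divergence between two densities on a common grid.

    Uses the identity KL(p||q) + KL(q||p) = integral (p - q) * log(p / q).
    """
    p = p / (p.sum() * dx)
    q = q / (q.sum() * dx)
    return np.sum((p - q) * np.log(p / q)) * dx

x = np.linspace(300.0, 1100.0, 2001)  # grid over the Short range, in ms
dx = x[1] - x[0]
ref = np.exp(-0.5 * ((x - 670.0) / 110.0) ** 2)      # reference prior
narrow = np.exp(-0.5 * ((x - 670.0) / 60.0) ** 2)    # SD underestimated
shifted = np.exp(-0.5 * ((x - 700.0) / 110.0) ** 2)  # small bias in the mean

d_narrow = symmetric_kl(ref, narrow, dx)
d_shifted = symmetric_kl(ref, shifted, dx)
print(d_narrow, d_shifted)  # underestimating the SD costs far more than a 30 ms shift
```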
Incidentally, the low identifiability of $\sigma_\ell$ that we found in this task suggests that claims about the loss function adopted by observers in interval timing (see [24]), without independent validation, might deserve additional investigation. Finally, note that the analysis we performed is theoretical, as the effects of each experimental design are formulated in terms of changes in the parameters of the ideal reference observer. Nevertheless, the framework allows us to test the robustness of our conclusions as we modify our assumptions about the reference observer.

4.2 Slow-speed prior in speed perception

As a further demonstration, we use our framework to re-examine a well-known finding in visual speed perception: that observers have a heavy-tailed prior expectation for slow speeds [9, 29]. The original study uses a 2AFC paradigm [9], which we convert for our analysis into an equivalent estimation task (see e.g. [30]). In each trial, the stimulus magnitude $s$ is the speed of motion (e.g., the speed of a moving dot in deg/s), and the response $r$ is the perceived speed (e.g., measured by interception timing). Subjects perform in two conditions, with different contrast levels of the stimulus, either High ($c_\text{High} = 0.5$) or Low ($c_\text{Low} = 0.075$), corresponding to different levels of estimation noise. Note that in a real speed estimation experiment subjects quickly develop a prior that depends on the experimental distribution of speeds [30]; here, however, we assume no learning of that kind, in agreement with the underlying 2AFC task. Instead, we assume that observers use their 'natural' prior over speeds. Our goal is to probe the reliability of the inference of the slow-speed prior and of the noise characteristics of the reference observer (see [9]).
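The parametric slow-speed prior from [29] used in the next paragraph, $p_\text{prior}(s) \propto (s^2 + s_\text{prior}^2)^{-k_\text{prior}}$, can be explored numerically; its power-law tail retains non-negligible mass at high speeds. A small sketch (the $[0, 20]$ deg/s grid and the 4 deg/s cutoff are arbitrary choices):

```python
import numpy as np

s_prior, k_prior = 1.0, 2.4  # parameters of the slow-speed prior [29]

s = np.linspace(0.0, 20.0, 200_001)     # speeds in deg/s
ds = s[1] - s[0]
p = (s**2 + s_prior**2) ** (-k_prior)   # p(s) proportional to (s^2 + s_prior^2)^(-k_prior)
p /= p.sum() * ds                       # normalize numerically on the grid

mass_above_4 = p[s >= 4.0].sum() * ds   # probability of speeds above 4 deg/s
print(p.argmax() == 0, mass_above_4)    # mode at zero speed; a small but real fast tail
```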
We define the reference observer $\theta^*$ as follows: (a) the observer's prior is defined in task space by a parametric formula: $p_\text{prior}(s) = (s^2 + s_\text{prior}^2)^{-k_\text{prior}}$, with $s_\text{prior} = 1$ deg/s and $k_\text{prior} = 2.4$ [29]; (b) the sensory mapping has parameters $s_0 = 0.35$ deg/s, $d = 1$ [29]; (c) the amount of sensory noise depends on the contrast level, as per [9]: $\sigma_\text{High} = 0.2$, $\sigma_\text{Low} = 0.4$, and $\gamma = 0$, $\kappa = 0$; (d) the internal likelihood coincides with the measurement distribution; (e) the loss function in internal measurement space is almost-quadratic, with $\sigma_\ell = 0.5$, $\gamma_\ell = 0$, $\kappa_\ell = 0$; (f) we assume a considerable amount of reporting noise, with $\rho_0 = 0.3$ deg/s, $\rho_1 = 0.21$; (g) we assume a contrast-dependent lapse probability ($\lambda_\text{High} = 0.01$, $\lambda_\text{Low} = 0.05$); (h) all parameters that are not contrast-dependent are shared across the two conditions. For the inferred observer $\theta$ we allow all model parameters to change freely, keeping only assumptions (d) and (h). We consider the standard experimental setup described above (STD), and an 'uncoupled' variant (UNC) in which we do not take the usual assumption that the internal representation of the likelihoods is coupled to the experimental one (so $\tilde\sigma_\text{High}$, $\tilde\sigma_\text{Low}$, $\tilde\gamma$ and $\tilde\kappa$ are free parameters). As a sanity check, we also consider an observer with a uniformly flat speed prior (FLA), to show that in this case the algorithm can correctly infer back the absence of a prior for slow speeds (see Supplementary Material).

¹This degeneracy is not surprising, as both sensory and motor noise of the reference observer $\theta^*$ are approximately Gaussian in internal measurement space ($\sim$ log task space). This lack of identifiability also affects the prior, since the relative weight between prior and likelihood needs to remain roughly the same.

Figure 3: Internal representations in speed perception. Accuracy of the reconstructed internal representations (priors and likelihoods). Each row corresponds to different assumptions during the inference. a: The first column shows the reference log prior (thick red line) and the recovered mean log prior ± 1 SD (black line and shaded area). The other two columns display the approximate posteriors of $k_\text{prior}$ and $s_\text{prior}$, obtained by fitting the reconstructed 'non-parametric' priors with a parametric formula (see text). Each panel shows the median (black line), the interquartile range (dark-shaded area) and the 95% interval (light-shaded area). The green dashed line marks the true value. b: Box plots of the symmetric KL-divergence between the reconstructed and reference prior. c: Approximate posterior distributions for sensory mapping and sensory noise parameters. In experimental design STD, the internal likelihood parameters ($\tilde\sigma_\text{High}$, $\tilde\sigma_\text{Low}$) are equal to their objective counterparts ($\sigma_\text{High}$, $\sigma_\text{Low}$).

Unlike the previous example, our analysis shows that here the reconstruction of both the prior and the characteristics of the sensory noise is relatively reliable (Figure 3 and Supplementary Material), without major biases, even when we decouple the internal representation of the noise from its objective counterpart (except for underestimation of the noise lower bound $s_0$ and of the internal noise $\tilde\sigma_\text{High}$, Figure 3 c). In particular, in all cases the exponent $k_\text{prior}$ of the prior over speeds can be recovered with good accuracy. Our results provide theoretical validation, in addition to existing empirical support, for previous work that inferred internal representations in speed perception [9, 29].

5 Conclusions

We have proposed a framework for studying a priori identifiability of Bayesian models of perception.
We have built a fairly general class of observer models and presented an efficient technique to explore their vast identifiability landscape. In one case study, a time interval estimation task, we have demonstrated how our framework could be used to rank candidate experimental designs depending on their ability to resolve the underlying degeneracy of parameters of interest. The obtained ranking is non-trivial: for example, it suggests that experimentally imposing a narrow loss function may be detrimental, under certain assumptions. In a second case study, we have shown instead that the inference of internal representations in speed perception, at least when cast as an estimation task in the presence of a slow-speed prior, is generally robust and in theory not prone to major degeneracies. Several modifications can be implemented to increase the scope of the psychophysical tasks covered by the framework. For example, the observer model could include a generalization to arbitrary loss spaces (see Supplementary Material), the generative model could be extended to allow multiple cues (to analyze cue-integration studies), and a variant of the model could be developed for discrete-choice paradigms, such as 2AFC, whose identifiability properties are largely unknown.

References

[1] Geisler, W. S. (2011) Contributions of ideal observer theory to vision research. Vision Res 51, 771–781.
[2] Knill, D. C. & Richards, W. (1996) Perception as Bayesian inference. (Cambridge University Press).
[3] Trommershäuser, J., Maloney, L., & Landy, M. (2008) Decision making, movement planning and statistical decision theory. Trends Cogn Sci 12, 291–297.
[4] Pouget, A., Beck, J. M., Ma, W. J., & Latham, P. E. (2013) Probabilistic brains: knowns and unknowns. Nat Neurosci 16, 1170–1178.
[5] Maloney, L., Mamassian, P., et al. (2009) Bayesian decision theory as a model of human visual perception: testing Bayesian transfer. Vis Neurosci 26, 147–155.
[6] Vilares, I., Howard, J. D., Fernandes, H. L., Gottfried, J. A., & Körding, K. P. (2012) Differential representations of prior and likelihood uncertainty in the human brain. Curr Biol 22, 1641–1648.
[7] Körding, K. P. & Wolpert, D. M. (2004) Bayesian integration in sensorimotor learning. Nature 427, 244–247.
[8] Girshick, A., Landy, M., & Simoncelli, E. (2011) Cardinal rules: visual orientation perception reflects knowledge of environmental statistics. Nat Neurosci 14, 926–932.
[9] Stocker, A. A. & Simoncelli, E. P. (2006) Noise characteristics and prior expectations in human visual speed perception. Nat Neurosci 9, 578–585.
[10] Sanborn, A. & Griffiths, T. L. (2008) Markov chain Monte Carlo with people. Adv Neural Inf Process Syst 20, 1265–1272.
[11] Chalk, M., Seitz, A., & Seriès, P. (2010) Rapidly learned stimulus expectations alter perception of motion. J Vis 10, 1–18.
[12] Acerbi, L., Wolpert, D. M., & Vijayakumar, S. (2012) Internal representations of temporal statistics and feedback calibrate motor-sensory interval timing. PLoS Comput Biol 8, e1002771.
[13] Houlsby, N. M., Huszár, F., Ghassemi, M. M., Orbán, G., Wolpert, D. M., & Lengyel, M. (2013) Cognitive tomography reveals complex, task-independent mental representations. Curr Biol 23, 2169–2175.
[14] Acerbi, L., Vijayakumar, S., & Wolpert, D. M. (2014) On the origins of suboptimality in human probabilistic inference. PLoS Comput Biol 10, e1003661.
[15] Körding, K. P. & Wolpert, D. M. (2004) The loss function of sensorimotor learning. Proc Natl Acad Sci U S A 101, 9839–9842.
[16] Gekas, N., Chalk, M., Seitz, A. R., & Seriès, P. (2013) Complexity and specificity of experimentally-induced expectations in motion perception. J Vis 13, 1–18.
[17] Jones, M. & Love, B. (2011) Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behav Brain Sci 34, 169–188.
[18] Bowers, J. S. & Davis, C. J. (2012) Bayesian just-so stories in psychology and neuroscience. Psychol Bull 138, 389.
[19] Mamassian, P. & Landy, M. S. (2010) It's that time again. Nat Neurosci 13, 914–916.
[20] Simoncelli, E. P. (2009) in The Cognitive Neurosciences, ed. Gazzaniga, M. (MIT Press), pp. 525–535.
[21] Knill, D. C. (2003) Mixture models and the probabilistic structure of depth cues. Vision Res 43, 831–854.
[22] Anderson, J. R. (1978) Arguments concerning representations for mental imagery. Psychol Rev 85, 249.
[23] Navarro, D. J., Pitt, M. A., & Myung, I. J. (2004) Assessing the distinguishability of models and the informativeness of data. Cognitive Psychol 49, 47–84.
[24] Jazayeri, M. & Shadlen, M. N. (2010) Temporal context calibrates interval timing. Nat Neurosci 13, 1020–1026.
[25] Tassinari, H., Hudson, T., & Landy, M. (2006) Combining priors and noisy visual cues in a rapid pointing task. J Neurosci 26, 10154–10163.
[26] Natarajan, R., Murray, I., Shams, L., & Zemel, R. S. (2009) Characterizing response behavior in multisensory perception with conflicting cues. Adv Neural Inf Process Syst 21, 1153–1160.
[27] Carreira-Perpiñán, M. A. (2000) Mode-finding for mixtures of Gaussian distributions. IEEE T Pattern Anal 22, 1318–1323.
[28] Spiegelhalter, D. J., Best, N. G., Carlin, B. P., & Van Der Linde, A. (2002) Bayesian measures of model complexity and fit. J R Stat Soc B 64, 583–639.
[29] Hedges, J. H., Stocker, A. A., & Simoncelli, E. P. (2011) Optimal inference explains the perceptual coherence of visual motion stimuli. J Vis 11, 14, 1–16.
[30] Kwon, O. S. & Knill, D. C. (2013) The brain uses adaptive internal models of scene statistics for sensorimotor estimation and planning. Proc Natl Acad Sci U S A 110, E1064–E1073.
Low Rank Approximation Lower Bounds in Row-Update Streams

David P. Woodruff
IBM Research Almaden
dpwoodru@us.ibm.com

Abstract

We study low-rank approximation in the streaming model in which the rows of an $n \times d$ matrix $A$ are presented one at a time in an arbitrary order. At the end of the stream, the streaming algorithm should output a $k \times d$ matrix $R$ so that $\|A - AR^\dagger R\|_F^2 \le (1+\epsilon)\|A - A_k\|_F^2$, where $A_k$ is the best rank-$k$ approximation to $A$. A deterministic streaming algorithm of Liberty (KDD, 2013), with an improved analysis of Ghashami and Phillips (SODA, 2014), provides such a streaming algorithm using $O(dk/\epsilon)$ words of space. A natural question is if smaller space is possible. We give an almost matching lower bound of $\Omega(dk/\epsilon)$ bits of space, even for randomized algorithms which succeed only with constant probability. Our lower bound matches the upper bound of Ghashami and Phillips up to the word size, improving on a simple $\Omega(dk)$ space lower bound.

1 Introduction

In the last decade many algorithms for numerical linear algebra problems have been proposed, often providing substantial gains over more traditional algorithms based on the singular value decomposition (SVD). Much of this work was influenced by the seminal work of Frieze, Kannan, and Vempala [8]. These include algorithms for matrix product, low rank approximation, regression, and many other problems. These algorithms are typically approximate and succeed with high probability. Moreover, they also generally only require one or a small number of passes over the data. When the algorithm only makes a single pass over the data and uses a small amount of memory, it is typically referred to as a streaming algorithm. The memory restriction is especially important for large-scale data sets, e.g., matrices whose elements arrive online and/or are too large to fit in main memory.
These elements may be in the form of an entry or an entire row seen at a time; we refer to the former as the entry-update model and to the latter as the row-update model. The row-update model often makes sense when the rows correspond to individual entities. Typically one is interested in designing robust streaming algorithms which do not need to assume a particular order of the arriving elements for their correctness. Indeed, if data is collected online, such an assumption may be unrealistic. Muthukrishnan asked the question of determining the memory required of data stream algorithms for numerical linear algebra problems, including best rank-$k$ approximation, matrix product, eigenvalues, determinants, and inverses [18]. This question was posed again by Sarlós [21]. A number of exciting streaming algorithms now exist for matrix problems. Sarlós [21] gave 2-pass algorithms for matrix product, low rank approximation, and regression, which were sharpened by Clarkson and Woodruff [5], who also proved lower bounds in the entry-update model for a number of these problems. See also work by Andoni and Nguyen for estimating eigenvalues in a stream [2], and work in [1, 4, 6] which implicitly provides algorithms for approximate matrix product. In this work we focus on the low rank approximation problem. In this problem we are given an $n \times d$ matrix $A$ and would like to compute a matrix $B$ of rank at most $k$ for which $\|A - B\|_F \le (1+\epsilon)\|A - A_k\|_F$. Here, for a matrix $A$, $\|A\|_F$ denotes its Frobenius norm $\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{d} A_{i,j}^2}$, and $A_k$ is the best rank-$k$ approximation to $A$ in this norm, given by the SVD. Clarkson and Woodruff [5] show that in the entry-update model, one can compute a factorization $B = L \cdot U \cdot R$ with $L \in \mathbb{R}^{n\times k}$, $U \in \mathbb{R}^{k\times k}$, and $R \in \mathbb{R}^{k\times d}$, with a streaming algorithm using $O(k\epsilon^{-2}(n + d/\epsilon^2)\log(nd))$ bits of space. They also show a lower bound of $\Omega(k\epsilon^{-1}(n + d)\log(nd))$ bits of space.
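To make the row-update streaming setting concrete, the following is a minimal sketch in the spirit of Frequent Directions, the deterministic algorithm of Liberty [16] as analyzed by Ghashami and Phillips [9]. This simplified variant recomputes an SVD at every row (the published algorithm batches updates for efficiency) and is for illustration only:

```python
import numpy as np

def frequent_directions(A, ell):
    """Stream the rows of A, maintaining an ell x d sketch B in O(ell * d) space.

    After each insertion, all squared singular values of B are shrunk by the
    smallest one, freeing a zero row for the next input row. Projecting A onto
    the top-k right singular vectors of B gives a relative-error rank-k
    approximation once ell is around k + k/eps (Ghashami-Phillips analysis).
    """
    B = np.zeros((ell, A.shape[1]))
    for row in A:
        B[-1] = row                                    # fill the spare zero row
        _, sv, Vt = np.linalg.svd(B, full_matrices=False)
        sv = np.sqrt(np.maximum(sv**2 - sv[-1]**2, 0.0))
        B = sv[:, None] * Vt                           # smallest direction zeroed out
    return B

rng = np.random.default_rng(0)
n, d, k, ell = 300, 40, 3, 15
A = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + 0.1 * rng.normal(size=(n, d))

B = frequent_directions(A, ell)
R = np.linalg.svd(B, full_matrices=False)[2][:k]       # k x d, orthonormal rows
err = np.linalg.norm(A - (A @ R.T) @ R, 'fro')**2
best = (np.linalg.svd(A, compute_uv=False)[k:]**2).sum()
print(err / best)  # at most ell / (ell - k) = 1.25 for this sketch size
```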
One limitation of these bounds is that they hold only when the algorithm is required to output a factorization $L \cdot U \cdot R$. In many cases $n \gg d$, and using memory that grows linearly with $n$ (which the above lower bounds show is unavoidable) is prohibitive. As observed in previous work [9, 16], in downstream applications we are often only interested in an approximation to the top $k$ principal components, i.e., the matrix $R$ above, and so the lower bounds of Clarkson and Woodruff can be too restrictive. For example, in PCA the goal is to compute the most important directions in the row space of $A$. By reanalyzing an algorithm of Liberty [16], Ghashami and Phillips [9] were able to overcome this restriction in the row-update model, showing that Liberty's algorithm is a streaming algorithm which finds a $k \times d$ matrix $R$ for which $\|A - AR^\dagger R\|_F \le (1+\epsilon)\|A - A_k\|_F$ using only $O(dk/\epsilon)$ words of space. Here $R^\dagger$ is the Moore-Penrose pseudoinverse of $R$ and $R^\dagger R$ denotes the projection onto the row space of $R$. Importantly, this space bound no longer depends on $n$. Moreover, their algorithm is deterministic and achieves relative error. We note that Liberty's algorithm itself is similar in spirit to earlier work on incremental PCA [3, 10, 11, 15, 19], but that work missed the idea of using a Misra-Gries heavy hitters subroutine [17], which is used to bound the additive error (later improved to relative error by Ghashami and Phillips). It also seems possible to obtain a streaming algorithm using $O(dk(\log n)/\epsilon)$ words of space, using the coreset approach in an earlier paper by Feldman et al. [7]. This work is motivated by the following questions: Is the $O(dk/\epsilon)$ space bound tight, or can one achieve an even smaller amount of space? What if one also allows randomization? In this work we answer the above questions. Our main theorem is the following. Theorem 1.
Any, possibly randomized, streaming algorithm in the row-update model which outputs a $k \times d$ matrix $R$ and guarantees that $\|A - AR^\dagger R\|_F^2 \le (1+\epsilon)\|A - A_k\|_F^2$ with probability at least 2/3 must use $\Omega(kd/\epsilon)$ bits of space. Up to a factor of the word size (which is typically $O(\log(nd))$ bits), our main theorem shows that the algorithm of Liberty is optimal. It also shows that allowing for randomization and a small probability of error does not significantly help in reducing the memory required. We note that a simple argument gives an $\Omega(kd)$ bit lower bound (see Lemma 2 below), which intuitively follows from the fact that if $A$ itself has rank $k$, then $R$ needs to have the same row space as $A$, and specifying a random $k$-dimensional subspace of $\mathbb{R}^d$ requires $\Omega(kd)$ bits. Hence, the main interest here is improving upon this lower bound to $\Omega(kd/\epsilon)$ bits of space. This extra $1/\epsilon$ factor is significant for small values of $\epsilon$, e.g., if one wants approximations as close to machine precision as possible with a given amount of memory. The only other lower bounds for streaming algorithms for low rank approximation that we know of are due to Clarkson and Woodruff [5]. As in their work, we use the Index problem in communication complexity to establish our bounds; this is a communication game between two players, Alice and Bob, holding a string $x \in \{0,1\}^r$ and an index $i \in [r] = \{1, 2, \ldots, r\}$, respectively. In this game Alice sends a single message to Bob, who should output $x_i$ with constant probability. It is known (see, e.g., [13]) that this problem requires Alice's message to be $\Omega(r)$ bits long. If Alg is a streaming algorithm for low rank approximation, and Alice can create a matrix $A_x$ while Bob can create a matrix $B_i$ (depending on their respective inputs $x$ and $i$), then if Bob can output $x_i$ with constant probability from the output of Alg on the concatenated matrix $[A_x; B_i]$, the memory required of Alg is $\Omega(r)$ bits, since Alice's message is the state of Alg after running it on $A_x$.
The main technical challenges are thus in showing how to choose $A_x$ and $B_i$, as well as showing how the output of Alg on $[A_x; B_i]$ can be used to solve Index. This is where our work departs significantly from that of Clarkson and Woodruff [5]. Indeed, a major challenge is that in Theorem 1 we only require the output to be the matrix $R$, whereas in Clarkson and Woodruff's work one can reconstruct $AR^\dagger R$ from the output. This causes technical complications, since there is much less information in the output of the algorithm to use to solve the communication game. The intuition behind the proof of Theorem 1 is that given a $2 \times d$ matrix $A = [1, x; 1, 0^d]$, where $x$ is a random unit vector, then if $P = R^\dagger R$ is a sufficiently good projection matrix for the low rank approximation problem on $A$, the second row of $AP$ actually reveals a lot of information about $x$. This may be counterintuitive at first, since one may think that $[1, 0^d; 1, 0^d]$ is a perfectly good low rank approximation. However, it turns out that $[1, x/2; 1, x/2]$ is a much better low rank approximation in Frobenius norm, and even this is not optimal. Therefore Bob, who has $[1, 0^d]$ together with the output $P$, can compute the second row of $AP$, which necessarily reveals a lot of information about $x$ (e.g., if $AP \approx [1, x/2; 1, x/2]$, its second row would reveal a lot of information about $x$), and therefore one could hope to embed an instance of the Index problem into $x$. Most of the technical work is about reducing the general problem to this $2 \times d$ primitive problem.

2 Main Theorem

This section is devoted to proving Theorem 1. We start with a simple lemma showing an $\Omega(kd)$ lower bound, which we will refer to later. The proof of this lemma is in the full version.

Lemma 2. Any streaming algorithm which, for every input $A$, with constant probability (over its internal randomness) succeeds in outputting a matrix $R$ for which $\|A - AR^\dagger R\|_F \le (1+\epsilon)\|A - A_k\|_F$ must use $\Omega(kd)$ bits of space.
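The $2 \times d$ intuition above is easy to verify numerically: for $A = [1, x; 1, 0^d]$ with $x$ a unit vector, the candidate $[1, 0^d; 1, 0^d]$ has squared Frobenius error 1, while $[1, x/2; 1, x/2]$ has error 1/2, and the optimal rank-1 approximation from the SVD does better still, with error $(3-\sqrt 5)/2 \approx 0.38$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
x = rng.normal(size=d)
x /= np.linalg.norm(x)                        # random unit vector

row1 = np.concatenate(([1.0], x))             # [1, x]
row2 = np.concatenate(([1.0], np.zeros(d)))   # [1, 0^d]
A = np.vstack([row1, row2])

def err2(B):
    return np.linalg.norm(A - B, 'fro')**2    # squared Frobenius error

B_naive = np.vstack([row2, row2])                         # [1, 0^d; 1, 0^d]
B_half = np.vstack([np.concatenate(([1.0], x / 2))] * 2)  # [1, x/2; 1, x/2]
U, sv, Vt = np.linalg.svd(A, full_matrices=False)
B_best = sv[0] * np.outer(U[:, 0], Vt[0])                 # optimal rank-1 (SVD)

print(err2(B_naive), err2(B_half), err2(B_best))
```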
Returning to the proof of Theorem 1, let $c > 0$ be a small constant to be determined. We consider the following two-player problem between Alice and Bob: Alice has a $ck/\epsilon \times d$ matrix $A$ which can be written as a block matrix $[I, R]$, where $I$ is the $ck/\epsilon \times ck/\epsilon$ identity matrix, and $R$ is a $ck/\epsilon \times (d - ck/\epsilon)$ matrix whose entries are in $\{-1/(d - ck/\epsilon)^{1/2}, +1/(d - ck/\epsilon)^{1/2}\}$. Here $[I, R]$ means we append the columns of $I$ to the left of the columns of $R$. Bob is given a set of $k$ standard unit vectors $e_{i_1}, \ldots, e_{i_k}$, for distinct $i_1, \ldots, i_k \in [ck/\epsilon] = \{1, 2, \ldots, ck/\epsilon\}$. Here we need $c/\epsilon > 1$, but we can assume $\epsilon$ is less than a sufficiently small constant, as otherwise we would just need to prove an $\Omega(kd)$ lower bound, which is established by Lemma 2. Let $B$ be the matrix $[A; e_{i_1}; \ldots; e_{i_k}]$ obtained by stacking $A$ on top of the vectors $e_{i_1}, \ldots, e_{i_k}$. The goal is for Bob to output a rank-$k$ projection matrix $P \in \mathbb{R}^{d\times d}$ for which $\|B - BP\|_F \le (1+\epsilon)\|B - B_k\|_F$. Denote this problem by $f$. We will show that the randomized 1-way communication complexity of this problem, $R^{1\text{-way}}_{1/4}(f)$, in which Alice sends a single message to Bob and Bob fails with probability at most 1/4, is $\Omega(kd/\epsilon)$ bits. More precisely, let $\mu$ be the following product distribution on Alice and Bob's inputs: the entries of $R$ are chosen independently and uniformly at random in $\{-1/(d - ck/\epsilon)^{1/2}, +1/(d - ck/\epsilon)^{1/2}\}$, while $\{i_1, \ldots, i_k\}$ is a uniformly random set among all sets of $k$ distinct indices in $[ck/\epsilon]$. We will show that $D^{1\text{-way}}_{\mu,1/4}(f) = \Omega(kd/\epsilon)$, where $D^{1\text{-way}}_{\mu,1/4}(f)$ denotes the minimum communication cost over all deterministic 1-way (from Alice to Bob) protocols which fail with probability at most 1/4 when the inputs are distributed according to $\mu$. By Yao's minimax principle (see, e.g., [14]), $R^{1\text{-way}}_{1/4}(f) \ge D^{1\text{-way}}_{\mu,1/4}(f)$. We use the following two-player problem, Index, in order to lower bound $D^{1\text{-way}}_{\mu,1/4}(f)$. In this problem Alice is given a string $x \in \{0,1\}^r$, while Bob is given an index $i \in [r]$.
Alice sends a single message to Bob, who needs to output x_i with probability at least 2/3. Again by Yao's minimax principle, we have that R^{1-way}_{1/3}(Index) ≥ D^{1-way}_{ν,1/3}(Index), where ν is the distribution for which x and i are chosen independently and uniformly at random from their respective domains. The following is well-known.
Fact 3. [13] D^{1-way}_{ν,1/3}(Index) = Ω(r).
Theorem 4. For c a small enough positive constant, and d ≥ k/ϵ, we have D^{1-way}_{µ,1/4}(f) = Ω(dk/ϵ).
Proof. We will reduce from the Index problem with r = (ck/ϵ)(d − ck/ϵ). Alice, given her string x to Index, creates the ck/ϵ × d matrix A = [I, R] as follows. The matrix I is the ck/ϵ × ck/ϵ identity matrix, while the matrix R is a ck/ϵ × (d − ck/ϵ) matrix with entries in {−1/(d − ck/ϵ)^{1/2}, +1/(d − ck/ϵ)^{1/2}}. For an arbitrary bijection between the coordinates of x and the entries of R, Alice sets a given entry in R to −1/(d − ck/ϵ)^{1/2} if the corresponding coordinate of x is 0, and otherwise Alice sets the given entry in R to +1/(d − ck/ϵ)^{1/2}.
[Figure: block structure of the hard instance. Alice's matrix A = [I, R] has column blocks of width ck/ϵ and d − ck/ϵ; Bob contributes k standard unit vectors; the rows of R are partitioned into R_S and R_T according to Bob's index set S, and the rows of B are partitioned into blocks B1 and B2.]
In the Index problem, Bob is given an index, which under the bijection between coordinates of x and entries of R, corresponds to being given a row index i and an entry j in the i-th row of R that he needs to recover. He sets iℓ = i for a random ℓ ∈ [k], and chooses k − 1 distinct and random indices i_j ∈ [ck/ϵ] \ {iℓ}, for j ∈ [k] \ {ℓ}. Observe that if (x, i) ∼ ν, then (R, i1, . . . , ik) ∼ µ. Suppose there is a protocol in which Alice sends a single message to Bob who solves f with probability at least 3/4 under µ. We show that this can be used to solve Index with probability at least 2/3 under ν. The theorem will follow by Fact 3. Consider the matrix B which is the matrix A stacked on top of the rows e_{i1}, . . . , e_{ik}, in that order, so that B has ck/ϵ + k rows.
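As a small-scale, hypothetical illustration of this construction (with m playing the role of ck/ϵ and arbitrarily chosen dimensions), the sketch below builds Alice's matrix A = [I, R] from a bit string and stacks on Bob's unit vectors; it also checks the identity ∥B∥_F² = 2ck/ϵ + k used later in the proof, since each row of A has squared norm 2 and each of Bob's k rows has squared norm 1:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, k = 8, 32, 3                        # m plays the role of ck/eps

# Alice's bits, one per entry of R, mapped to signs {-1, +1}
bits = rng.integers(0, 2, size=(m, d - m))
R = (2.0 * bits - 1.0) / np.sqrt(d - m)   # entries +-1/sqrt(d - m)
A = np.hstack([np.eye(m), R])             # A = [I, R]

# Bob's k distinct standard unit vectors e_{i_1}, ..., e_{i_k}
idx = rng.choice(m, size=k, replace=False)
E = np.eye(d)[idx]
B = np.vstack([A, E])                     # B = [A; e_{i_1}; ...; e_{i_k}]

frob2 = np.linalg.norm(B) ** 2
print(frob2, 2 * m + k)                   # both ~19: ||B||_F^2 = 2m + k
```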
We proceed to lower bound ∥B − BP∥_F² in a certain way, which will allow our reduction to Index to be carried out. We need the following fact:
Fact 5. ((2.4) of [20]) Let A be an m × n matrix with i.i.d. entries which are each +1/√n with probability 1/2 and −1/√n with probability 1/2, and suppose m/n < 1. Then for all t > 0,
Pr[∥A∥2 > 1 + t + √(m/n)] ≤ α e^(−α′ n t^(3/2)),
where α, α′ > 0 are absolute constants. Here ∥A∥2 is the operator norm sup_x ∥Ax∥/∥x∥ of A.
We apply Fact 5 to the matrix R, which implies
Pr[∥R∥2 > 1 + √c + √((ck/ϵ)/(d − ck/ϵ))] ≤ α e^(−α′ (d − ck/ϵ) c^(3/4)),
and using that d ≥ k/ϵ and c > 0 is a sufficiently small constant, this implies
Pr[∥R∥2 > 1 + 3√c] ≤ e^(−βd),   (1)
where β > 0 is an absolute constant (depending on c). Note that for c > 0 sufficiently small, (1 + 3√c)² ≤ 1 + 7√c. Let E be the event that ∥R∥2² ≤ 1 + 7√c, which we condition on. We partition the rows of B into B1 and B2, where B1 contains those rows whose projection onto the first ck/ϵ coordinates equals e_i for some i ∉ {i1, . . . , ik}. Note that B1 is (ck/ϵ − k) × d and B2 is 2k × d. Here, B2 is 2k × d since it includes the rows in A indexed by i1, . . . , ik, together with the rows e_{i1}, . . . , e_{ik}. Let us also partition the rows of R into R_T and R_S, so that the union of the rows in R_T and in R_S is equal to R, where the rows of R_T are the rows of R in B1, and the rows of R_S are the non-zero rows of R in B2 (note that k of the rows are non-zero and k are zero in B2 restricted to the columns in R).
Lemma 6. For any unit vector u, write u = u_R + u_S + u_T, where S = {i1, . . . , ik}, T = [ck/ϵ] \ S, and R = [d] \ [ck/ϵ], and where u_A for a set A is 0 on indices j ∉ A. Then, conditioned on E occurring,
∥Bu∥² ≤ (1 + 7√c)(2 − ∥u_T∥² − ∥u_R∥² + 2∥u_S + u_T∥∥u_R∥).
Proof. Let C be the matrix consisting of the top ck/ϵ rows of B, so that C has the form [I, R], where I is a ck/ϵ × ck/ϵ identity matrix. By construction of B, ∥Bu∥² = ∥u_S∥² + ∥Cu∥².
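Fact 5 says that a random sign matrix scaled by 1/√n has operator norm close to 1 + √(m/n). A quick numeric illustration (not a proof; the dimensions here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 200, 2000
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(n)

op_norm = np.linalg.norm(A, ord=2)         # largest singular value
print(op_norm)  # close to 1 + sqrt(m/n) = 1 + sqrt(0.1) ~ 1.316
```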
Now, Cu = u_S + u_T + Ru_R, and so
∥Cu∥² = ∥u_S + u_T∥² + ∥Ru_R∥² + 2(u_S + u_T)^T Ru_R
≤ ∥u_S + u_T∥² + (1 + 7√c)∥u_R∥² + 2∥u_S + u_T∥∥Ru_R∥
≤ (1 + 7√c)(∥u_S∥² + ∥u_T∥² + ∥u_R∥²) + 2(1 + 3√c)∥u_S + u_T∥∥u_R∥
≤ (1 + 7√c)(1 + 2∥u_S + u_T∥∥u_R∥),
and so
∥Bu∥² ≤ (1 + 7√c)(1 + ∥u_S∥² + 2∥u_S + u_T∥∥u_R∥) = (1 + 7√c)(2 − ∥u_R∥² − ∥u_T∥² + 2∥u_S + u_T∥∥u_R∥).
We will also make use of the following simple but tedious fact, shown in the full version.
Fact 7. For x ∈ [0, 1], the function f(x) = 2x√(1 − x²) − x² is maximized when x = √(1/2 − √5/10). We define ζ to be the value of f(x) at its maximum, where ζ = 2/√5 + √5/10 − 1/2 ≈ .618.
Corollary 8. Conditioned on E occurring, ∥B∥2² ≤ (1 + 7√c)(2 + ζ).
Proof. By Lemma 6, for any unit vector u, ∥Bu∥² ≤ (1 + 7√c)(2 − ∥u_T∥² − ∥u_R∥² + 2∥u_S + u_T∥∥u_R∥). Suppose we replace the vector u_S + u_T with an arbitrary vector supported on coordinates in S with the same norm as u_S + u_T. Then the right hand side of this expression cannot increase, which means it is maximized when ∥u_T∥ = 0, for which it equals (1 + 7√c)(2 − ∥u_R∥² + 2√(1 − ∥u_R∥²)∥u_R∥), and setting ∥u_R∥ to equal the x in Fact 7, we see that this expression is at most (1 + 7√c)(2 + ζ).
Write the projection matrix P output by the streaming algorithm as UU^T, where U is d × k with orthonormal columns u_i (so R†R = P in the notation of Section 1). Applying Lemma 6 and Fact 7 to each of the columns u_i, we show in the full version:
∥BP∥_F² ≤ (1 + 7√c)((2 + ζ)k − ∑_{i=1}^k ∥(u_i)_T∥²).   (2)
Using the matrix Pythagorean theorem, we thus have
∥B − BP∥_F² = ∥B∥_F² − ∥BP∥_F² ≥ 2ck/ϵ + k − (1 + 7√c)((2 + ζ)k − ∑_{i=1}^k ∥(u_i)_T∥²)   [using ∥B∥_F² = 2ck/ϵ + k]
≥ 2ck/ϵ + k − (1 + 7√c)(2 + ζ)k + (1 + 7√c) ∑_{i=1}^k ∥(u_i)_T∥².   (3)
We now argue that ∥B − BP∥_F² cannot be too large if Alice and Bob succeed in solving f. First, we need to upper bound ∥B − B_k∥_F². To do so, we create a matrix B̃_k of rank k and bound ∥B − B̃_k∥_F². Matrix B̃_k will be 0 on the rows in B1.
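Fact 7 and the value of ζ can be sanity-checked numerically; the sketch below only verifies the stated constants and is no substitute for the proof in the full version:

```python
import numpy as np

zeta = 2 / np.sqrt(5) + np.sqrt(5) / 10 - 0.5     # ~ 0.618
x_star = np.sqrt(0.5 - np.sqrt(5) / 10)           # claimed maximizer

f = lambda x: 2 * x * np.sqrt(1 - x**2) - x**2
grid = np.linspace(0.0, 1.0, 100001)

print(f(x_star), zeta)           # both ~ 0.6180339887
print(grid[np.argmax(f(grid))])  # grid maximizer ~ x_star ~ 0.5257
```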
We can group the rows of B2 into k pairs so that each pair has the form (e_i + v_i, e_i), where i ∈ [ck/ϵ] and v_i is a unit vector supported on [d] \ [ck/ϵ]. We let Y_i be the optimal (in Frobenius norm) rank-1 approximation to the matrix [e_i + v_i; e_i]. By direct computation (e.g., with an online SVD calculator such as http://www.bluebit.gr/matrix-calculator/), the maximum squared singular value of this matrix is 2 + ζ. Our matrix B̃_k then consists of a single Y_i for each pair in B2. Observe that B̃_k has rank at most k and
∥B − B_k∥_F² ≤ ∥B − B̃_k∥_F² ≤ 2ck/ϵ + k − (2 + ζ)k.
Therefore, if Bob succeeds in solving f on input B, then
∥B − BP∥_F² ≤ (1 + ϵ)(2ck/ϵ + k − (2 + ζ)k) ≤ 2ck/ϵ + k − (2 + ζ)k + 2ck.   (4)
Comparing (3) and (4), we arrive at, conditioned on E:
∑_{i=1}^k ∥(u_i)_T∥² ≤ (1/(1 + 7√c)) · (7√c(2 + ζ)k + 2ck) ≤ c1 k,   (5)
where c1 > 0 is a constant that can be made arbitrarily small by making c > 0 arbitrarily small. Since P is a projector, ∥BP∥_F = ∥BU∥_F. Write U = Û + Ū, where the vectors in Û are supported on T, and the vectors in Ū are supported on [d] \ T. We have
∥BÛ∥_F² ≤ ∥B∥2² c1 k ≤ (1 + 7√c)(2 + ζ)c1 k ≤ c2 k,
where the first inequality uses ∥BÛ∥_F ≤ ∥B∥2∥Û∥_F and (5), the second inequality uses that event E occurs, and the third inequality holds for a constant c2 > 0 that can be made arbitrarily small by making the constant c > 0 arbitrarily small. Combining with (4) and using the triangle inequality,
∥BŪ∥_F ≥ ∥BP∥_F − ∥BÛ∥_F   [triangle inequality]
≥ ∥BP∥_F − √(c2 k)   [using our bound on ∥BÛ∥_F²]
= √(∥B∥_F² − ∥B − BP∥_F²) − √(c2 k)   [by the matrix Pythagorean theorem]
≥ √((2 + ζ)k − 2ck) − √(c2 k)   [by (4)]
≥ √((2 + ζ)k − c3 k),   (6)
where c3 > 0 is a constant that can be made arbitrarily small for c > 0 an arbitrarily small constant (note that c2 > 0 also becomes arbitrarily small as c > 0 becomes arbitrarily small).
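The "direct computation" for the pair [e_i + v_i; e_i] can be reproduced in a few lines instead of an online calculator: in the orthonormal basis {e_i, v_i} the pair is the 2 × 2 matrix [[1, 1], [1, 0]], whose squared singular values are (3 ± √5)/2, i.e. exactly 2 + ζ and 1 − ζ:

```python
import numpy as np

zeta = 2 / np.sqrt(5) + np.sqrt(5) / 10 - 0.5

# [e_i + v_i; e_i] expressed in the orthonormal basis {e_i, v_i}
Z = np.array([[1.0, 1.0],
              [1.0, 0.0]])
s = np.linalg.svd(Z, compute_uv=False)

print(s**2)  # [2.618..., 0.381...] = [2 + zeta, 1 - zeta]
```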
Hence, ∥BŪ∥_F² ≥ (2 + ζ)k − c3 k, and together with Corollary 8, that implies ∥Ū∥_F² ≥ k − c4 k for a constant c4 that can be made arbitrarily small by making c > 0 arbitrarily small. Our next goal is to show that ∥B2Ū∥_F² is almost as large as ∥BŪ∥_F². Consider any column ū of Ū, and write it as ū_S + ū_R. Hence,
∥Bū∥² = ∥R_T ū_R∥² + ∥B2ū∥²   [using B1ū = R_T ū_R]
≤ ∥R_T ū_R∥² + ∥ū_S + R_S ū_R∥² + ∥ū_S∥²   [by definition of the components]
= ∥Rū_R∥² + 2∥ū_S∥² + 2ū_S^T R_S ū_R   [using the Pythagorean theorem]
≤ 1 + 7√c + ∥ū_S∥² + 2∥ū_S∥∥R_S ū_R∥,
using ∥Rū_R∥² ≤ (1 + 7√c)∥ū_R∥² and ∥ū_R∥² + ∥ū_S∥² ≤ 1 (also using Cauchy-Schwarz to bound the other term). Suppose ∥R_S ū_R∥ = τ∥ū_R∥ for a value 0 ≤ τ ≤ 1 + 7√c. Then ∥Bū∥² ≤ 1 + 7√c + ∥ū_S∥² + 2τ∥ū_S∥√(1 − ∥ū_S∥²). We thus have
∥Bū∥² ≤ 1 + 7√c + (1 − τ)∥ū_S∥² + τ(∥ū_S∥² + 2∥ū_S∥√(1 − ∥ū_S∥²))
≤ 1 + 7√c + (1 − τ) + τ(1 + ζ)   [by Fact 7]
≤ 2 + τζ + 7√c,   (7)
and hence, letting τ1, . . . , τk denote the corresponding values of τ for the k columns of Ū, we have
∥BŪ∥_F² ≤ (2 + 7√c)k + ζ ∑_{i=1}^k τ_i.   (8)
Comparing the square of (6) with (8), we have
∑_{i=1}^k τ_i ≥ k − c5 k,   (9)
where c5 > 0 is a constant that can be made arbitrarily small by making c > 0 an arbitrarily small constant. Now, ∥Ū∥_F² ≥ k − c4 k as shown above, while since ∥R_S ū_R∥ = τ_i∥ū_R∥ if ū_R is the i-th column of Ū, by (9) we have
∥R_S Ū_R∥_F² ≥ (1 − c6)k   (10)
for a constant c6 that can be made arbitrarily small by making c > 0 an arbitrarily small constant. Now ∥RŪ_R∥_F² ≤ (1 + 7√c)k since event E occurs, and ∥RŪ_R∥_F² = ∥R_T Ū_R∥_F² + ∥R_S Ū_R∥_F² since the rows of R are the concatenation of rows of R_S and R_T, so combining with (10), we arrive at
∥R_T Ū_R∥_F² ≤ c7 k,   (11)
where c7 > 0 is a constant that can be made arbitrarily small by making c > 0 arbitrarily small.
Combining the square of (6) with (11), we thus have
∥B2Ū∥_F² = ∥BŪ∥_F² − ∥B1Ū∥_F² = ∥BŪ∥_F² − ∥R_T Ū_R∥_F² ≥ (2 + ζ)k − c3 k − c7 k ≥ (2 + ζ)k − c8 k,   (12)
where the constant c8 > 0 can be made arbitrarily small by making c > 0 arbitrarily small. By the triangle inequality,
∥B2U∥_F ≥ ∥B2Ū∥_F − ∥B2Û∥_F ≥ ((2 + ζ)k − c8 k)^{1/2} − (c2 k)^{1/2}.   (13)
Hence,
∥B2 − B2P∥_F = √(∥B2∥_F² − ∥B2U∥_F²)   [matrix Pythagorean, ∥B2U∥_F = ∥B2P∥_F]
≤ √(∥B2∥_F² − (∥B2Ū∥_F − ∥B2Û∥_F)²)   [triangle inequality]
≤ √(3k − (((2 + ζ)k − c8 k)^{1/2} − (c2 k)^{1/2})²),   [using (13) and ∥B2∥_F² = 3k]   (14)
or equivalently,
∥B2 − B2P∥_F² ≤ 3k − ((2 + ζ)k − c8 k) − c2 k + 2k(((2 + ζ) − c8)c2)^{1/2} ≤ (1 − ζ)k + c8 k + 2k(((2 + ζ) − c8)c2)^{1/2} ≤ (1 − ζ)k + c9 k   (15)
for a constant c9 > 0 that can be made arbitrarily small by making the constant c > 0 small enough. This intuitively says that P provides a good low rank approximation for the matrix B2. Notice that by (15),
∥B2P∥_F² = ∥B2∥_F² − ∥B2 − B2P∥_F² ≥ 3k − (1 − ζ)k − c9 k ≥ (2 + ζ)k − c9 k.   (16)
Now B2 is a 2k × d matrix and we can partition its rows into k pairs of rows of the form Z_ℓ = (e_{iℓ} + R_{iℓ}, e_{iℓ}), for ℓ = 1, . . . , k. Here we abuse notation and think of R_{iℓ} as a d-dimensional vector, its first ck/ϵ coordinates set to 0. Each such pair of rows is a rank-2 matrix, which we abuse notation and call Z_ℓ^T. By direct computation, Z_ℓ^T has squared maximum singular value 2 + ζ. We would like to argue that the projection of P onto the row span of most Z_ℓ has length very close to 1. To this end, for each Z_ℓ consider the orthonormal basis V_ℓ^T of right singular vectors for its row space (which is span(e_{iℓ}, R_{iℓ})). We let v_{ℓ,1}^T, v_{ℓ,2}^T be these two right singular vectors with corresponding singular values σ1 and σ2 (which will be the same for all ℓ, see below). We are interested in the quantity ∆ = ∑_{ℓ=1}^k ∥V_ℓ^T P∥_F², which intuitively measures how much of P gets projected onto the row spaces of the Z_ℓ^T. The following lemma and corollary are shown in the full version.
Lemma 9.
Conditioned on event E, ∆ ∈ [k − c10 k, k + c10 k], where c10 > 0 is a constant that can be made arbitrarily small by making c > 0 arbitrarily small.
The following corollary is shown in the full version.
Corollary 10. Conditioned on event E, for a 1 − √c9 + 2c10 fraction of ℓ ∈ [k], ∥V_ℓ^T P∥_F² ≤ 1 + c11, and for a 99/100 fraction of ℓ ∈ [k], we have ∥V_ℓ^T P∥_F² ≥ 1 − c11, where c11 > 0 is a constant that can be made arbitrarily small by making the constant c > 0 arbitrarily small.
Recall that Bob holds i = iℓ for a random ℓ ∈ [k]. It follows (conditioned on E) by a union bound that with probability at least 49/50, ∥V_ℓ^T P∥_F² ∈ [1 − c11, 1 + c11], which we call the event F and condition on. We also condition on the event G that ∥Z_ℓ^T P∥_F² ≥ (2 + ζ) − c12, for a constant c12 > 0 that can be made arbitrarily small by making c > 0 an arbitrarily small constant. Combining the first part of Corollary 10 together with (16), event G holds with probability at least 99.5/100, provided c > 0 is a sufficiently small constant. By a union bound it follows that E, F, and G occur simultaneously with probability at least 49/51. As ∥Z_ℓ^T P∥_F² = σ1²∥v_{ℓ,1}^T P∥² + σ2²∥v_{ℓ,2}^T P∥², with σ1² = 2 + ζ and σ2² = 1 − ζ, events E, F, and G imply that ∥v_{ℓ,1}^T P∥² ≥ 1 − c13, where c13 > 0 is a constant that can be made arbitrarily small by making the constant c > 0 arbitrarily small. Observe that ∥v_{ℓ,1}^T P∥² = ⟨v_{ℓ,1}, z⟩², where z is a unit vector in the direction of the projection of v_{ℓ,1} onto P. By the Pythagorean theorem,
∥v_{ℓ,1} − ⟨v_{ℓ,1}, z⟩z∥² = 1 − ⟨v_{ℓ,1}, z⟩², and so ∥v_{ℓ,1} − ⟨v_{ℓ,1}, z⟩z∥² ≤ c14,   (17)
for a constant c14 > 0 that can be made arbitrarily small by making c > 0 arbitrarily small. We thus have Z_ℓ^T P = σ1⟨v_{ℓ,1}, z⟩u_{ℓ,1}z^T + σ2⟨v_{ℓ,2}, w⟩u_{ℓ,2}w^T, where w is a unit vector in the direction of the projection of v_{ℓ,2} onto P, and u_{ℓ,1}, u_{ℓ,2} are the left singular vectors of Z_ℓ^T.
Since F occurs, we have that |⟨v_{ℓ,2}, w⟩| ≤ c11, where c11 > 0 is a constant that can be made arbitrarily small by making the constant c > 0 arbitrarily small. It follows now by (17) that
∥Z_ℓ^T P − σ1 u_{ℓ,1} v_{ℓ,1}^T∥_F² ≤ c15,   (18)
where c15 > 0 is a constant that can be made arbitrarily small by making the constant c > 0 arbitrarily small. By direct calculation, u_{ℓ,1} = −.851 e_{iℓ} − .526 R_{iℓ} and v_{ℓ,1} = −.851 e_{iℓ} − .526 R_{iℓ}. It follows that ∥Z_ℓ^T P − (2 + ζ)[.724 e_{iℓ} + .448 R_{iℓ}; .448 e_{iℓ} + .277 R_{iℓ}]∥_F² ≤ c15. Since e_{iℓ} is the second row of Z_ℓ^T, it follows that ∥e_{iℓ}^T P − (2 + ζ)(.448 e_{iℓ} + .277 R_{iℓ})∥² ≤ c15. Observe that Bob has e_{iℓ} and P, and can therefore compute e_{iℓ}^T P. Moreover, as c15 > 0 can be made arbitrarily small by making the constant c > 0 arbitrarily small, it follows that a 1 − c16 fraction of the signs of coordinates of e_{iℓ}^T P, restricted to coordinates in [d] \ [ck/ϵ], must agree with those of (2 + ζ)·.277 R_{iℓ}, which in turn agree with those of R_{iℓ}. Here c16 > 0 is a constant that can be made arbitrarily small by making the constant c > 0 arbitrarily small. Hence, in particular, the sign of the j-th coordinate of R_{iℓ}, which Bob needs to output, agrees with that of the j-th coordinate of e_{iℓ}^T P with probability at least 1 − c16. Call this event H. By a union bound over the occurrence of events E, F, G, and H, and the streaming algorithm succeeding (which occurs with probability 3/4), it follows that Bob succeeds in solving Index with probability at least 49/51 − 1/4 − c16 > 2/3, as required. This completes the proof.

3 Conclusion

We have shown an Ω(dk/ϵ) bit lower bound for streaming algorithms in the row-update model for outputting a k × d matrix R with ∥A − AR†R∥_F ≤ (1 + ϵ)∥A − A_k∥_F, thus showing that the algorithm of [9] is optimal up to the word size. The next natural goal would be to obtain multi-pass lower bounds, which seem quite challenging.
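The numerical constants .851, .526, .724, .448 and .277 quoted above all come from the SVD of the same 2 × 2 pair matrix; the sketch below reproduces them (up to the usual sign ambiguity of singular vectors):

```python
import numpy as np

Z = np.array([[1.0, 1.0],     # row e_i + R_i in the basis {e_i, R_i-direction}
              [1.0, 0.0]])    # row e_i
U, s, Vt = np.linalg.svd(Z)

u1, v1 = U[:, 0], Vt[0]
print(np.abs(u1), np.abs(v1))   # both ~ [0.851, 0.526]

outer = np.outer(u1, v1)
outer *= np.sign(outer[0, 0])   # fix the sign ambiguity
print(outer)                    # ~ [[0.724, 0.448], [0.448, 0.277]]
```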
Such lower bound techniques may also be useful for showing the optimality of a constant-round O(sdk/ϵ) + (sk/ϵ)^{O(1)} communication protocol in [12] for low-rank approximation in the distributed communication model.
Acknowledgments. I would like to thank Edo Liberty and Jeff Phillips for many useful discussions and detailed comments on this work (thanks to Jeff for the figure!). I would also like to thank the XDATA program of the Defense Advanced Research Projects Agency (DARPA), administered through Air Force Research Laboratory contract FA8750-12-C0323, for supporting this work.

References
[1] N. Alon, P. B. Gibbons, Y. Matias, and M. Szegedy. Tracking join and self-join sizes in limited storage. J. Comput. Syst. Sci., 64(3):719–747, 2002.
[2] A. Andoni and H. L. Nguyen. Eigenvalues of a matrix in the streaming model. In SODA, pages 1729–1737, 2013.
[3] M. Brand. Incremental singular value decomposition of uncertain data with missing values. In ECCV (1), pages 707–720, 2002.
[4] M. Charikar, K. Chen, and M. Farach-Colton. Finding frequent items in data streams. Theor. Comput. Sci., 312(1):3–15, 2004.
[5] K. L. Clarkson and D. P. Woodruff. Numerical linear algebra in the streaming model. In STOC, pages 205–214, 2009.
[6] G. Cormode and S. Muthukrishnan. An improved data stream summary: the count-min sketch and its applications. J. Algorithms, 55(1):58–75, 2005.
[7] D. Feldman, M. Schmidt, and C. Sohler. Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering. In SODA, pages 1434–1453, 2013.
[8] A. M. Frieze, R. Kannan, and S. Vempala. Fast Monte-Carlo algorithms for finding low-rank approximations. J. ACM, 51(6):1025–1041, 2004.
[9] M. Ghashami and J. M. Phillips. Relative errors for deterministic low-rank matrix approximations. In SODA, pages 707–717, 2014.
[10] G. H. Golub and C. F. van Loan. Matrix Computations (3. ed.). Johns Hopkins University Press, 1996.
[11] P. M. Hall, A. D. Marshall, and R. R. Martin. Incremental eigenanalysis for classification. In BMVC, pages 1–10, 1998.
[12] R. Kannan, S. Vempala, and D. P. Woodruff. Nimble algorithms for cloud computing. CoRR, 2013.
[13] I. Kremer, N. Nisan, and D. Ron. On randomized one-round communication complexity. Computational Complexity, 8(1):21–49, 1999.
[14] E. Kushilevitz and N. Nisan. Communication Complexity. Cambridge University Press, 1997.
[15] A. Levy and M. Lindenbaum. Efficient sequential Karhunen-Loeve basis extraction. In ICCV, page 739, 2001.
[16] E. Liberty. Simple and deterministic matrix sketching. In KDD, pages 581–588, 2013.
[17] J. Misra and D. Gries. Finding repeated elements. Sci. Comput. Program., 2(2):143–152, 1982.
[18] S. Muthukrishnan. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science, 1(2), 2005.
[19] D. A. Ross, J. Lim, R.-S. Lin, and M.-H. Yang. Incremental learning for robust visual tracking. International Journal of Computer Vision, 77(1-3):125–141, 2008.
[20] M. Rudelson and R. Vershynin. Non-asymptotic theory of random matrices: extreme singular values. CoRR, 2010.
[21] T. Sarlós. Improved approximation algorithms for large matrices via random projections. In FOCS, pages 143–152, 2006.
Probabilistic ODE Solvers with Runge-Kutta Means

Michael Schober, MPI for Intelligent Systems, Tübingen, Germany (mschober@tue.mpg.de)
David Duvenaud, Department of Engineering, Cambridge University (dkd23@cam.ac.uk)
Philipp Hennig, MPI for Intelligent Systems, Tübingen, Germany (phennig@tue.mpg.de)

Abstract
Runge-Kutta methods are the classic family of solvers for ordinary differential equations (ODEs), and the basis for the state of the art. Like most numerical methods, they return point estimates. We construct a family of probabilistic numerical methods that instead return a Gauss-Markov process defining a probability distribution over the ODE solution. In contrast to prior work, we construct this family such that posterior means match the outputs of the Runge-Kutta family exactly, thus inheriting their proven good properties. Remaining degrees of freedom not identified by the match to Runge-Kutta are chosen such that the posterior probability measure fits the observed structure of the ODE. Our results shed light on the structure of Runge-Kutta solvers from a new direction, provide a richer, probabilistic output, have low computational cost, and raise new research questions.

1 Introduction

Differential equations are a basic feature of dynamical systems. Hence, researchers in machine learning have repeatedly been interested in both the problem of inferring an ODE description from observed trajectories of a dynamical system [1, 2, 3, 4], and its dual, inferring a solution (a trajectory) for an ODE initial value problem (IVP) [5, 6, 7, 8]. Here we address the latter, classic numerical problem. Runge-Kutta (RK) methods [9, 10] are standard tools for this purpose. Over more than a century, these algorithms have matured into a very well-understood, efficient framework [11]. As recently pointed out by Hennig and Hauberg [6], since Runge-Kutta methods are linear extrapolation methods, their structure can be emulated by Gaussian process (GP) regression algorithms.
Such an algorithm was envisioned by Skilling in 1991 [5], and the idea has recently attracted both theoretical [8] and practical [6, 7] interest. By returning a posterior probability measure over the solution of the ODE problem, instead of a point estimate, Gaussian process solvers extend the functionality of RK solvers in ways that are particularly interesting for machine learning. Solution candidates can be drawn from the posterior and marginalized [7]. This can allow probabilistic solvers to stop earlier, and to deal (approximately) with probabilistically uncertain inputs and problem definitions [6]. However, current GP ODE solvers do not share the good theoretical convergence properties of Runge-Kutta methods. Specifically, they do not have high polynomial order, explained below. We construct GP ODE solvers whose posterior mean functions exactly match those of the RK families of first, second and third order. This yields a probabilistic numerical method which combines the strengths of Runge-Kutta methods with the additional functionality of GP ODE solvers. It also provides a new interpretation of the classic algorithms, raising new conceptual questions. While our algorithm could be seen as a “Bayesian” version of the Runge-Kutta framework, a philosophically less loaded interpretation is that, where Runge-Kutta methods fit a single curve (a point estimate) to an IVP, our algorithm fits a probability distribution over such potential solutions, such that the mean of this distribution matches the Runge-Kutta estimate exactly. We find a family of models in the space of Gaussian process linear extrapolation methods with this property, and select a member of this family (fix the remaining degrees of freedom) through statistical estimation. 
Table 1: All consistent Runge-Kutta methods of order p ≤ 3 and number of stages s = p (see [11]).

p = 1 (Euler's method):
0 | 0
  |   1

p = 2:
0 | 0
α | α   0
  |   1 − 1/(2α)   1/(2α)

p = 3:
0 | 0
u | u   0
v | v − v(v−u)/(u(2−3u))   v(v−u)/(u(2−3u))   0
  |   1 − (2−3v)/(6u(u−v)) − (2−3u)/(6v(v−u))   (2−3v)/(6u(u−v))   (2−3u)/(6v(v−u))

2 Background

An ODE Initial Value Problem (IVP) is to find a function x(t) : R → R^N such that the ordinary differential equation ẋ = f(x, t) (where ẋ = ∂x/∂t) holds for all t ∈ T = [t0, tH], and x(t0) = x0. We assume that a unique solution exists. To keep notation simple, we will treat x as scalar-valued; the multivariate extension is straightforward (it involves N separate GP models, explained in supp.). Runge-Kutta methods [9, 10] are carefully designed linear extrapolation methods operating on small contiguous subintervals [tn, tn + h] ⊂ T of length h. Assume for the moment that n = 0. Within [t0, t0 + h], an RK method of stage s collects evaluations yi = f(x̂i, t0 + h ci) at s recursively defined input locations, i = 1, . . . , s, where x̂i is constructed linearly from the previously-evaluated y_{j<i} as
x̂i = x0 + h ∑_{j=1}^{i−1} wij yj,   (1)
then returns a single prediction for the solution of the IVP at t0 + h, as x̂(t0 + h) = x0 + h ∑_{i=1}^s bi yi (modern variants can also construct non-probabilistic error estimates, e.g. by combining the same observations into two different RK predictions [12]). In compact form,
yi = f(x0 + h ∑_{j=1}^{i−1} wij yj, t0 + h ci),  i = 1, . . . , s;   x̂(t0 + h) = x0 + h ∑_{i=1}^s bi yi.   (2)
x̂(t0 + h) is then taken as the initial value for t1 = t0 + h and the process is repeated until tn + h ≥ tH.
A Runge-Kutta method is thus identified by a lower-triangular matrix W = {wij}, and vectors c = [c1, . . . , cs], b = [b1, . . . , bs], often presented compactly in a Butcher tableau [13]:

c1 | 0
c2 | w21  0
c3 | w31  w32  0
⋮  | ⋮          ⋱
cs | ws1  ws2  ⋯  w_{s,s−1}  0
---+---------------------------
   | b1   b2   ⋯  b_{s−1}   bs

As Hennig and Hauberg [6] recently pointed out, the linear structure of the extrapolation steps in Runge-Kutta methods means that their algorithmic structure, the Butcher tableau, can be constructed naturally from a Gaussian process regression method over x(t), where the yi are treated as "observations" of ẋ(t0 + h ci) and the x̂i are subsequent posterior estimates (more below). However, proper RK methods have structure that is not generally reproduced by an arbitrary Gaussian process prior on x: Their distinguishing property is that the approximation x̂ and the Taylor series of the true solution coincide at t0 + h up to the p-th term—their numerical error is bounded by ∥x(t0 + h) − x̂(t0 + h)∥ ≤ K h^{p+1} for some constant K (higher orders are better, because h is assumed to be small). The method is then said to be of order p [11]. A method is consistent if it is of order p = s. This is only possible for p < 5 [14, 15]. There are no methods of order p > s. High order is a strong desideratum for ODE solvers, not currently offered by Gaussian process extrapolators. Table 1 lists all consistent methods of order p ≤ 3 where s = p. For s = 1, only Euler's method (linear extrapolation) is consistent. For s = 2, there exists a family of methods of order p = 2, parametrized by a single parameter α ∈ (0, 1], where α = 1/2 and α = 1 mark the midpoint rule and Heun's method, respectively. (Footnote: In this work, we only address so-called explicit RK methods (shortened to "Runge-Kutta methods" for simplicity). These are the base case of the extensive theory of RK methods. Many generalizations can be found in [11]. Extending the probabilistic framework discussed here to the wider Runge-Kutta class is not trivial.)
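The generic explicit RK step of Eq. (2) is short to implement. Below is a minimal sketch (an illustration, not code from the paper) that executes one step for an arbitrary Butcher tableau, shown with Euler's method and the midpoint rule (α = 1/2) from Table 1 on the test problem ẋ = x:

```python
import numpy as np

def rk_step(f, x0, t0, h, W, b, c):
    """One explicit Runge-Kutta step for the tableau (W, b, c), cf. Eq. (2)."""
    s = len(b)
    y = np.zeros(s)
    for i in range(s):
        xi = x0 + h * sum(W[i][j] * y[j] for j in range(i))
        y[i] = f(xi, t0 + h * c[i])
    return x0 + h * np.dot(b, y)

f = lambda x, t: x          # test ODE: dx/dt = x, exact solution e^t

# Euler (p = 1) and midpoint rule (p = 2, alpha = 1/2) from Table 1
euler = rk_step(f, 1.0, 0.0, 0.1, [[0.0]], [1.0], [0.0])
alpha = 0.5
mid = rk_step(f, 1.0, 0.0, 0.1,
              [[0.0, 0.0], [alpha, 0.0]],
              [1.0 - 1.0 / (2 * alpha), 1.0 / (2 * alpha)],
              [0.0, alpha])

print(euler, mid)  # 1.1 and 1.105; exact value exp(0.1) ~ 1.10517
```

As expected for a second-order method, the midpoint step is far closer to exp(0.1) than the Euler step.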
For s = 3, third order methods are parameterized by two variables u, v ∈ (0, 1].
Gaussian processes (GPs) are well-known in the NIPS community, so we omit an introduction. We will use the standard notation µ : R → R for the mean function, and k : R × R → R for the covariance function; k_{UV} for Gram matrices of kernel values k(ui, vj), and analogously for the mean function: µ_T = [µ(t1), . . . , µ(tN)]. A GP prior p(x) = GP(x; µ, k) and observations (T, Y) = {(t1, y1), . . . , (ts, ys)} having likelihood N(Y; x_T, Λ) give rise to a posterior GP^s(x; µ^s, k^s) with
µ^s_t = µ_t + k_{tT}(k_{TT} + Λ)^{−1}(Y − µ_T)   and   k^s_{uv} = k_{uv} − k_{uT}(k_{TT} + Λ)^{−1}k_{Tv}.   (3)
GPs are closed under linear maps. In particular, the joint distribution over x and its derivative is
p[(x, ẋ)] = GP[(x, ẋ); (µ, µ∂), [[k, k∂], [∂k, ∂k∂]]]   (4)
with
µ∂ = ∂µ(t)/∂t,   k∂ = ∂k(t, t′)/∂t′,   ∂k = ∂k(t, t′)/∂t,   ∂k∂ = ∂²k(t, t′)/(∂t ∂t′).   (5)
A recursive algorithm analogous to RK methods can be constructed [5, 6] by setting the prior mean to the constant µ(t) = x0, then recursively estimating x̂i in some form from the current posterior over x. The choice in [6] is to set x̂i = µ^i(t0 + h ci). "Observations" yi = f(x̂i, t0 + h ci) are then incorporated with likelihood p(yi | x) = N(yi; ẋ(t0 + h ci), Λ). This recursively gives estimates
x̂(t0 + h ci) = x0 + ∑_{j=1}^{i−1} ∑_{ℓ=1}^{i−1} k∂(t0 + h ci, t0 + h cℓ)(K∂∂ + Λ)^{−1}_{ℓj} yj = x0 + h ∑_j wij yj,   (6)
with (K∂∂)_{ij} = ∂k∂(t0 + h ci, t0 + h cj). The final prediction is the posterior mean at this point:
x̂(t0 + h) = x0 + ∑_{i=1}^s ∑_{j=1}^s k∂(t0 + h, t0 + h cj)(K∂∂ + Λ)^{−1}_{ji} yi = x0 + h ∑_i bi yi.   (7)

3 Results

The described GP ODE estimate shares the algorithmic structure of RK methods (i.e. they both use weighted sums of the constructed estimates to extrapolate). However, in RK methods, weights and evaluation positions are found by careful analysis of the Taylor series of f, such that low-order terms cancel. In GP ODE solvers they arise, perhaps more naturally but also with less structure, by the choice of the ci and the kernel.
In previous work [6, 7], both were chosen ad hoc, with no guarantee of convergence order. In fact, as is shown in the supplements, the choices in these two works—square-exponential kernel with finite length-scale, evaluations at the predictive mean—do not even give the first order convergence of Euler's method. Below we present three specific regression models based on integrated Wiener covariance functions and specific evaluation points. Each model is the improper limit of a Gauss-Markov process, such that the posterior distribution after s evaluations is a proper Gaussian process, and the posterior mean function at t0 + h coincides exactly with the Runge-Kutta estimate. We will call these methods, which give a probabilistic interpretation to RK methods and extend them to return probability distributions, Gauss-Markov-Runge-Kutta (GMRK) methods, because they are based on Gauss-Markov priors and yield Runge-Kutta predictions.

[Figure 1: Conceptual sketches for the first-order (Euler), second-order (midpoint) and third-order (u = 1/4, v = 3/4) methods. Top: prior mean in gray; initial value at t0 = 1 (filled blue); gradient evaluations (empty blue circles, lines); posterior means after the first, second and third gradient observation in orange, green and red, respectively; samples from the final posterior as dashed lines. Since, for the second- and third-order methods, only the final prediction is a proper probability distribution, only mean functions are shown for intermediate steps. True solution to the (linear) ODE in black. Bottom: for better visibility, the same data as above, minus the final posterior mean.]

3.1 Design choices and desiderata for a probabilistic ODE solver

Although we are not the first to attempt constructing an ODE solver that returns a probability distribution, open questions still remain about what, exactly, the properties of such a probabilistic numerical method should be. Chkrebtii et al. [8] previously made the case that Gaussian measures are uniquely suited because solution spaces of ODEs are Banach spaces, and provided results on consistency. Above, we added the desideratum for the posterior mean to have high order, i.e. to reproduce the Runge-Kutta estimate. Below, three additional issues become apparent:
Motivation of evaluation points. Both Skilling [5] and Hennig and Hauberg [6] propose to put the "nodes" x̂(t0 + h ci) at the current posterior mean of the belief. We will find that this can be made consistent with the order requirement for the RK methods of first and second order. However, our third-order methods will be forced to use a node x̂(t0 + h ci) that, albeit lying along a function w(t) in the reproducing kernel Hilbert space associated with the posterior GP covariance function, is not the mean function itself. It will remain open whether the algorithm can be amended to remove this blemish. However, as the nodes do not enter the GP regression formulation, their choice does not directly affect the probabilistic interpretation.
Extension beyond the first extrapolation interval. Importantly, the Runge-Kutta argument for convergence order only holds strictly for the first extrapolation interval [t0, t0 + h]. From the second interval onward, the RK step solves an estimated IVP, and begins to accumulate a global estimation error not bounded by the convergence order (an effect termed "Lady Windermere's fan" by Wanner [16]). Should a probabilistic solver aim to faithfully reproduce this imperfect chain of RK solvers, or rather try to capture the accumulating global error? We investigate both options below.
Calibration of uncertainty. A question easily posed but hard to answer is what it means for the probability distribution returned by a probabilistic method to be well calibrated. For our Gaussian case, requiring RK order in the posterior mean determines all but one degree of freedom of an answer.
The remaining parameter is the output scale of the kernel, the "error bar" of the estimate. We offer a relatively simple statistical argument below that fits this parameter based on observed values of f.
We can now proceed to the main results. In the following, we consider extrapolation algorithms based on Gaussian process priors with vanishing prior mean function and noise-free observation model (Λ = 0 in Eq. (3)). All covariance functions in question are integrals over the kernel k0(t̃, t̃′) = σ² min(t̃ − τ, t̃′ − τ) (parameterized by scale σ² > 0 and off-set τ ∈ R; valid on the domain t̃, t̃′ > τ), the covariance of the Wiener process [17]. Such integrated Wiener processes are Gauss-Markov processes of increasing order, so inference in these methods can be performed by filtering, at linear cost [18]. We will use the shorthands t = t̃ − τ and t′ = t̃′ − τ for inputs shifted by τ.

3.2 Gauss-Markov methods matching Euler's method

Theorem 1. The once-integrated Wiener process prior p(x) = GP(x; 0, k1) with
k1(t, t′) = ∬_τ^{t̃,t̃′} k0(u, v) du dv = σ²(min³(t, t′)/3 + |t − t′| min²(t, t′)/2),   (8)
choosing evaluation nodes at the posterior mean, gives rise to Euler's method.
Proof. We show that the corresponding Butcher tableau from Table 1 holds. After "observing" the initial value, the second observation y1, constructed by evaluating f at the posterior mean at t0, is
y1 = f(µ|x0(t0), t0) = f(k(t0, t0) k(t0, t0)^{−1} x0, t0) = f(x0, t0),   (9)
directly from the definitions. The posterior mean after incorporating y1 is
µ|x0,y1(t0 + h) = [k(t0 + h, t0), k∂(t0 + h, t0)] [[k(t0, t0), k∂(t0, t0)], [∂k(t0, t0), ∂k∂(t0, t0)]]^{−1} (x0, y1)^T = x0 + h y1.   (10)
An explicit linear algebraic derivation is available in the supplements.

3.3 Gauss-Markov methods matching all Runge-Kutta methods of second order

Extending to second order is not as straightforward as integrating the Wiener process a second time. The theorem below shows that this only works after moving the onset −τ of the process towards infinity.
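Theorem 1 can be probed numerically without taking any limit: with the once-integrated Wiener kernel, the extrapolation weights on (x0, y1) come out as exactly (1, h) for any t0 > 0. The sketch below (an illustration, not the supplementary derivation) hard-codes the derivatives of k1 from Eq. (8) for the case t′ ≤ t and checks this:

```python
import numpy as np

# once-integrated Wiener kernel k1 and its derivatives, valid for tp <= t
def k1(t, tp):   return tp**3 / 3 + (t - tp) * tp**2 / 2
def dk1(t, tp):  return tp**2 / 2 + (t - tp) * tp    # d k1 / d t'
def ddk1(t, tp): return tp                           # d^2 k1 / (dt dt')

t0, h = 2.0, 0.1
# Gram matrix of (x(t0), x'(t0)) and cross-covariances with x(t0 + h)
K = np.array([[k1(t0, t0), dk1(t0, t0)],
              [dk1(t0, t0), ddk1(t0, t0)]])
vec = np.array([k1(t0 + h, t0), dk1(t0 + h, t0)])

w = np.linalg.solve(K, vec)   # K is symmetric, so this equals vec @ K^{-1}
print(w)                      # [1.0, 0.1]: posterior mean is x0 + h*y1
```

Changing t0 or h leaves the structure intact: the weight on x0 stays 1 and the weight on y1 stays h, which is Euler's method.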
Fortunately, this limit still leads to a proper posterior probability distribution.

Theorem 2. Consider the twice-integrated Wiener process prior $p(x) = \mathcal{GP}(x; 0, k_2)$ with
$$k_2(t,t') = \int_\tau^{\tilde t}\!\!\int_\tau^{\tilde t'} k_1(u,v)\,du\,dv = \sigma^2\left(\frac{\min^5(t,t')}{20} + \frac{|t-t'|}{12}\left((t+t')\min^3(t,t') - \frac{\min^4(t,t')}{2}\right)\right). \qquad (11)$$
Choosing evaluation nodes at the posterior mean gives rise to the RK family of second-order methods in the limit of $\tau \to -\infty$. (The twice-integrated Wiener process is a proper Gauss-Markov process for all finite values of $\tau$ and $t, t' > 0$. In the limit of $\tau \to -\infty$, it turns into an improper prior of infinite local variance.)

Proof. The proof is analogous to the previous one. We need to show that all equations given by the Butcher tableau and choice of parameters hold for any choice of $\alpha$. The constraint for $y_1$ holds trivially as in Eq. (9). Because $y_2 = f(x_0 + h\alpha y_1, t_0 + h\alpha)$, we need to show $\mu_{|x_0,y_1}(t_0 + h\alpha) = x_0 + h\alpha y_1$. Therefore, let $\alpha \in (0,1]$ be arbitrary but fixed:
$$\mu_{|x_0,y_1}(t_0 + h\alpha) = \begin{bmatrix} k(t_0{+}h\alpha,t_0) & k^\partial(t_0{+}h\alpha,t_0)\end{bmatrix}\begin{bmatrix} k(t_0,t_0) & k^\partial(t_0,t_0) \\ {}^\partial k(t_0,t_0) & {}^\partial k^\partial(t_0,t_0)\end{bmatrix}^{-1}\begin{pmatrix} x_0 \\ y_1\end{pmatrix}$$
$$= \begin{bmatrix} \tfrac{t_0^3(10(h\alpha)^2 + 15 h\alpha t_0 + 6 t_0^2)}{120} & \tfrac{t_0^2(6(h\alpha)^2 + 8 h\alpha t_0 + 3 t_0^2)}{24}\end{bmatrix}\begin{bmatrix} t_0^5/20 & t_0^4/8 \\ t_0^4/8 & t_0^3/3\end{bmatrix}^{-1}\begin{pmatrix} x_0 \\ y_1\end{pmatrix} = \begin{bmatrix} 1 - \tfrac{10(h\alpha)^2}{3 t_0^2} & h\alpha + \tfrac{2(h\alpha)^2}{t_0}\end{bmatrix}\begin{pmatrix} x_0 \\ y_1\end{pmatrix} \xrightarrow{\;\tau \to -\infty\;} x_0 + h\alpha\, y_1. \qquad (12)$$
As $t_0 = \tilde t_0 - \tau$, the mismatched terms vanish for $\tau \to -\infty$. Finally, extending the vector and matrix with one more entry, a lengthy computation shows that $\lim_{\tau\to-\infty}\mu_{|x_0,y_1,y_2}(t_0+h) = x_0 + h\left(1 - \tfrac{1}{2\alpha}\right)y_1 + \tfrac{h}{2\alpha}\, y_2$ also holds, analogous to Eq. (10). Omitted details can be found in the supplements. They also include the final-step posterior covariance. Its finite values mean that this posterior indeed defines a proper GP.

3.4 A Gauss-Markov method matching Runge-Kutta methods of third order

Moving from second to third order, in addition to the limit towards an improper prior, also requires a departure from the policy of placing extrapolation nodes at the posterior mean.

Theorem 3.
Consider the thrice-integrated Wiener process prior $p(x) = \mathcal{GP}(x; 0, k_3)$ with
$$k_3(t,t') = \int_\tau^{\tilde t}\!\!\int_\tau^{\tilde t'} k_2(u,v)\,du\,dv = \sigma^2\left(\frac{\min^7(t,t')}{252} + |t-t'|\,\frac{\min^4(t,t')}{720}\left(5\max^2(t,t') + 2tt' + 3\min^2(t,t')\right)\right). \qquad (13)$$
Evaluating twice at the posterior mean and a third time at a specific element of the posterior covariance function's RKHS gives rise to the entire family of RK methods of third order, in the limit of $\tau \to -\infty$.

Proof. The proof progresses entirely analogously to Theorems 1 and 2, with one exception for the term where the mean does not match the RK weights exactly. This is the case for $y_3 = f\big(x_0 + h\big[\big(v - \tfrac{v(v-u)}{u(2-3u)}\big)y_1 + \tfrac{v(v-u)}{u(2-3u)}\, y_2\big],\; t_0 + hv\big)$ (see Table 1). The weights of $Y$ which give the posterior mean at this point are given by $kK^{-1}$ (cf. Eq. (3)), which, in the limit, has value (see supp.)
$$\lim_{\tau\to-\infty}\begin{bmatrix} k(t_0{+}hv,t_0) & k^\partial(t_0{+}hv,t_0) & k^\partial(t_0{+}hv,t_0{+}hu)\end{bmatrix} K^{-1} = \begin{bmatrix} 1 & h\big(v - \tfrac{v^2}{2u}\big) & h\,\tfrac{v^2}{2u}\end{bmatrix}$$
$$= \begin{bmatrix} 1 & h\big(v - \tfrac{v(v-u)}{u(2-3u)} - \tfrac{v(3v-2)}{2(3u-2)}\big) & h\big(\tfrac{v(v-u)}{u(2-3u)} + \tfrac{v(3v-2)}{2(3u-2)}\big)\end{bmatrix}$$
$$= \begin{bmatrix} 1 & h\big(v - \tfrac{v(v-u)}{u(2-3u)}\big) & h\,\tfrac{v(v-u)}{u(2-3u)}\end{bmatrix} + \begin{bmatrix} 0 & -h\,\tfrac{v(3v-2)}{2(3u-2)} & h\,\tfrac{v(3v-2)}{2(3u-2)}\end{bmatrix}. \qquad (14)$$
This means that the final RK evaluation node does not lie at the posterior mean of the regressor. However, it can be produced by adding a correction term $w(v) = \mu(v) + \varepsilon(v)(y_2 - y_1)$, where
$$\varepsilon(v) = \frac{v}{2}\,\frac{3v-2}{3u-2} \qquad (15)$$
is a second-order polynomial in $v$. Since $k$ is of third or higher order in $v$ (depending on the value of $u$), $w$ can be written as an element of the thrice-integrated Wiener process' RKHS [19, §6.1]. Importantly, the final extrapolation weights $b$ under the limit of the Wiener process prior again match the RK weights exactly, regardless of how $y_3$ is constructed. We note in passing that Eq. (15) vanishes for $v = 2/3$. For this choice, the third RK observation is generated exactly at the posterior mean of the Gaussian process. Intriguingly, this is also the value for which the posterior variance at $t_0 + h$ is minimized.
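The re-grouping of weights in Eq. (14) and the correction term of Eq. (15) rest on the algebraic identity $\frac{v^2}{2u} = \frac{v(v-u)}{u(2-3u)} + \frac{v(3v-2)}{2(3u-2)}$, which is easy to sanity-check numerically (a sketch, not from the paper):

```python
def rk_weight(u, v):
    # third-order RK weight of y2 in the node at t0 + hv (Table 1)
    return v * (v - u) / (u * (2 - 3 * u))

def eps(u, v):
    # correction term of Eq. (15)
    return v / 2 * (3 * v - 2) / (3 * u - 2)

# the limiting GP weight v^2/(2u) from Eq. (14) splits into RK weight + correction
for u, v in [(0.3, 0.8), (0.5, 0.9), (0.25, 0.4)]:
    gp_weight = v**2 / (2 * u)
    assert abs(gp_weight - (rk_weight(u, v) + eps(u, v))) < 1e-12

print(eps(0.4, 2 / 3))   # ~0: for v = 2/3 the node lies at the posterior mean
```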
3.5 Choosing the output scale

The above theorems have shown that the first three families of Runge-Kutta methods can be constructed from repeatedly integrated Wiener process priors, giving a strong argument for the use of such priors in probabilistic numerical methods. However, requiring this match to a specific Runge-Kutta family does not in itself uniquely identify a particular kernel to be used: the posterior mean of a Gaussian process arising from noise-free observations is independent of the output scale (in our notation: $\sigma^2$) of the covariance function (this can also be seen by inspecting Eq. (3)). Thus, the parameter $\sigma^2$ can be chosen independently of the other parts of the algorithm, without breaking the match to Runge-Kutta. Several algorithms using the observed values of $f$ to choose $\sigma^2$ without major cost overhead have been proposed in the regression community before [e.g. 20, 21]. For this particular model an even more basic rule is possible: a simple derivation shows that, in all three families of methods defined above, the posterior belief over $\partial^s x/\partial t^s$ is a Wiener process, and the posterior mean function over the $s$-th derivative after all $s$ steps is a constant function. The Gaussian model implies that the expected distance of this function from the (zero) prior mean should be the marginal standard deviation $\sqrt{\sigma^2}$. We choose $\sigma^2$ such that this property is met, by setting $\sigma^2 = \left[\partial^s \mu_s(t)/\partial t^s\right]^2$. Figure 1 shows conceptual sketches highlighting the structure of GMRK methods. Interestingly, in both the second- and third-order families, our proposed priors are improper, so the solver cannot actually return a probability distribution until after the observation of all $s$ gradients in the RK step.

Some observations. We close the main results by highlighting some non-obvious aspects. First, it is intriguing that higher convergence order results from repeated integration of Wiener processes.
This repeated integration simultaneously adds to and weakens certain prior assumptions in the implicit (improper) Wiener prior: $s$-times integrated Wiener processes have marginal variance $k_s(t,t) \propto t^{2s+1}$. Since many ODEs (e.g. linear ones) have solution paths of values $O(\exp(t))$, it is tempting to wonder whether there exists a limit process of "infinitely-often integrated" Wiener processes giving natural coverage to this domain (the results on a linear ODE in Figure 1 show how the polynomial posteriors cannot cover the exponentially diverging true solution).

Figure 2: Options for the continuation of GMRK methods after the first extrapolation step (red); panels show naïve chaining, smoothing, and probabilistic continuation. All plots use the midpoint method and $h = 1$. Posterior after two steps (same for all three options) in red (mean, ±2 standard deviations). Extrapolation after 2, 3, 4 steps (gray vertical lines) in green. Final probabilistic prediction as green shading. True solution to (linear) ODE in black. Observations of $x$ and $\dot x$ marked by solid and empty blue circles, respectively. Bottom row shows the same data, plotted relative to the true solution, at higher y-resolution.

In this context, it is also noteworthy that $s$-times integrated Wiener priors incorporate the lower-order results for $s' < s$, so "highly-integrated" Wiener kernels can be used to match finite-order Runge-Kutta methods. Simultaneously, though, sample paths from an $s$-times integrated Wiener process are almost surely $s$-times differentiable. So it seems likely that achieving good performance with a Gauss-Markov Runge-Kutta solver requires trading off the good marginal variance coverage of high-order Markov models (i.e. repeatedly integrated Wiener processes) against modelling non-smooth solution paths with lower degrees of integration. We leave this very interesting question for future work.
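The marginal-variance claim $k_s(t,t) \propto t^{2s+1}$ can be read off the diagonals of Eqs. (8), (11) and (13), where the $|t-t'|$ terms vanish; a quick check (assuming $\sigma^2 = 1$, $\tau = 0$):

```python
def k1_diag(t): return t**3 / 3      # Eq. (8)  at t = t'
def k2_diag(t): return t**5 / 20     # Eq. (11) at t = t'
def k3_diag(t): return t**7 / 252    # Eq. (13) at t = t'

# doubling t multiplies the marginal variance by 2^(2s+1)
for s, k in [(1, k1_diag), (2, k2_diag), (3, k3_diag)]:
    assert abs(k(2.0) / k(1.0) - 2 ** (2 * s + 1)) < 1e-12
print("marginal variances grow like t^(2s+1)")
```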
4 Experiments

Since Runge-Kutta methods have been studied extensively for over a century [11], it is not necessary to evaluate their estimation performance again. Instead, we focus on an open conceptual question for the further development of probabilistic Runge-Kutta methods: if we accept high convergence order as a prerequisite to choose a probabilistic model, how should probabilistic ODE solvers continue after the first $s$ steps? Purely from an inference perspective, it seems unnatural to introduce new evaluations of $x$ (as opposed to $\dot x$) at $t_0 + nh$ for $n = 1, 2, \dots$. Also, with the exception of the Euler case, the posterior covariance after $s$ evaluations is of such a form that its renewed use in the next interval will not give Runge-Kutta estimates. Three options suggest themselves:

Naïve chaining. One could simply re-start the algorithm several times as if the previous step had created a novel IVP. This amounts to the classic RK setup. However, it does not produce a joint "global" posterior probability distribution (Figure 2, left column).

Smoothing. An ad-hoc remedy is to run the algorithm in the "naïve chaining" mode above, producing $N \times s$ gradient observations and $N$ function evaluations, but then to compute a joint posterior distribution by using the first $s$ gradient observations and 1 function evaluation as described in Section 3, and the remaining $s(N-1)$ gradients and $(N-1)$ function values as in standard GP inference. The appeal of this approach is that it produces a GP posterior whose mean goes through the RK points (Figure 2, center column). But from a probabilistic standpoint it seems contrived. In particular, it produces a very confident posterior covariance, which does not capture global error.

Figure 3: Comparison of a 2nd-order GMRK method and the method from [6]. Shown is error and posterior uncertainty of GMRK (green) and SE kernel (orange).
Dashed lines are ±2 standard deviations. The SE method shown used the best out of several evaluated parameter choices.

Continuing after $s$ evaluations. Perhaps most natural from the probabilistic viewpoint is to break with the RK framework after the first RK step, and simply continue to collect gradient observations, either at RK locations or anywhere else. The strength of this choice is that it produces a continuously growing marginal variance (Figure 2, right). One may perceive the departure from the established RK paradigm as problematic. However, we note again that the core theoretical argument for RK methods is strictly valid only in the first step; the argument for iterative continuation is a lot weaker.

Figure 2 shows exemplary results for these three approaches on the (stiff) linear IVP $\dot x(t) = -\tfrac{1}{2} x(t)$, $x(0) = 1$. Naïve chaining does not lead to a globally consistent probability distribution. Smoothing does give this global distribution, but the "observations" of function values create unnatural nodes of certainty in the posterior. The probabilistically most appealing mode of continuing inference directly offers a naturally increasing estimate of global error. At least for this simple test case, it also happens to work better in practice (note the good match to ground truth in the plots). We have found similar results for other test cases, notably also for non-stiff linear differential equations. But of course, probabilistic continuation breaks with at least the traditional mode of operation for Runge-Kutta methods, so a closer theoretical evaluation is necessary, which we are planning for a follow-up publication.

Comparison to square-exponential kernel. Since all theoretical guarantees are given in the form of upper bounds for the RK methods, the application of different GP models might still be favorable in practice. We compared the continuation method from Fig.
2 (right column) to the ad-hoc choice of a square-exponential (SE) kernel model, which was used by Hennig and Hauberg [6] (Fig. 3). For this test case, the GMRK method surpasses the SE-kernel algorithm both in accuracy and calibration: its mean is closer to the true solution than the SE method's, and its error bar covers the true solution, while the SE method is over-confident. This advantage in calibration is likely due to the more natural choice of the output scale $\sigma^2$ in the GMRK framework.

5 Conclusions

We derived an interpretation of Runge-Kutta methods in terms of the limit of Gaussian process regression with integrated Wiener covariance functions, and a structured but nontrivial extrapolation model. The result is a class of probabilistic numerical methods returning Gaussian process posterior distributions whose means can match Runge-Kutta estimates exactly. This class of methods has practical value, particularly to machine learning, where previous work has shown that the probability distribution returned by GP ODE solvers adds important functionality over those of point estimators. But these results also raise pressing open questions about probabilistic ODE solvers. This includes the question of how the GP interpretation of RK methods can be extended beyond the third order, and how ODE solvers should proceed after the first stage of evaluations.

Acknowledgments

The authors are grateful to Simo Särkkä for a helpful discussion.

References

[1] T. Graepel. "Solving noisy linear operator equations by Gaussian processes: Application to ordinary and partial differential equations". In: International Conference on Machine Learning (ICML). 2003.
[2] B. Calderhead, M. Girolami, and N. Lawrence. "Accelerating Bayesian inference over nonlinear differential equations with Gaussian processes". In: Advances in Neural Information Processing Systems (NIPS). 2008.
[3] F. Dondelinger et al. "ODE parameter inference using adaptive gradient matching with Gaussian processes".
In: Artificial Intelligence and Statistics (AISTATS). 2013, pp. 216–228.
[4] Y. Wang and D. Barber. "Gaussian Processes for Bayesian Estimation in Ordinary Differential Equations". In: International Conference on Machine Learning (ICML). 2014.
[5] J. Skilling. "Bayesian solution of ordinary differential equations". In: Maximum Entropy and Bayesian Methods, Seattle (1991).
[6] P. Hennig and S. Hauberg. "Probabilistic Solutions to Differential Equations and their Application to Riemannian Statistics". In: Proc. of the 17th Int. Conf. on Artificial Intelligence and Statistics (AISTATS). Vol. 33. JMLR, W&CP, 2014.
[7] M. Schober et al. "Probabilistic shortest path tractography in DTI using Gaussian Process ODE solvers". In: Medical Image Computing and Computer-Assisted Intervention (MICCAI 2014). Springer, 2014.
[8] O. Chkrebtii et al. "Bayesian Uncertainty Quantification for Differential Equations". In: arXiv preprint 1306.2365 (2013).
[9] C. Runge. "Über die numerische Auflösung von Differentialgleichungen". In: Mathematische Annalen 46 (1895), pp. 167–178.
[10] W. Kutta. "Beitrag zur näherungsweisen Integration totaler Differentialgleichungen". In: Zeitschrift für Mathematik und Physik 46 (1901), pp. 435–453.
[11] E. Hairer, S. Nørsett, and G. Wanner. Solving Ordinary Differential Equations I – Nonstiff Problems. Springer, 1987.
[12] J. R. Dormand and P. J. Prince. "A family of embedded Runge-Kutta formulae". In: Journal of Computational and Applied Mathematics 6.1 (1980), pp. 19–26.
[13] J. Butcher. "Coefficients for the study of Runge-Kutta integration processes". In: Journal of the Australian Mathematical Society 3.02 (1963), pp. 185–201.
[14] F. Ceschino and J. Kuntzmann. Problèmes différentiels de conditions initiales (méthodes numériques). Dunod, Paris, 1963.
[15] E. B. Shanks. "Solutions of Differential Equations by Evaluations of Functions". In: Mathematics of Computation 20.93 (1966), pp. 21–38.
[16] E. Hairer and C. Lubich.
"Numerical solution of ordinary differential equations". In: The Princeton Companion to Applied Mathematics. Ed. by N. Higham. PUP, 2012.
[17] N. Wiener. "Extrapolation, interpolation, and smoothing of stationary time series with engineering applications". In: Bull. Amer. Math. Soc. 56 (1950), pp. 378–381.
[18] S. Särkkä. Bayesian Filtering and Smoothing. Cambridge University Press, 2013.
[19] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[20] R. Shumway and D. Stoffer. "An approach to time series smoothing and forecasting using the EM algorithm". In: Journal of Time Series Analysis 3.4 (1982), pp. 253–264.
[21] Z. Ghahramani and G. Hinton. Parameter Estimation for Linear Dynamical Systems. Tech. rep. CRG-TR-96-2, University of Toronto, Dept. of Computer Science, 1996.
Learning a Concept Hierarchy from Multi-labeled Documents

Viet-An Nguyen1∗, Jordan Boyd-Graber2, Philip Resnik1,3,4, Jonathan Chang5
1Computer Science, 3Linguistics, 4UMIACS, Univ. of Maryland, College Park, MD (vietan@cs.umd.edu, resnik@umd.edu)
2Computer Science, Univ. of Colorado, Boulder, CO (Jordan.Boyd.Graber@colorado.edu)
5Facebook, Menlo Park, CA (jonchang@fb.com)

Abstract

While topic models can discover patterns of word usage in large corpora, it is difficult to meld this unsupervised structure with noisy, human-provided labels, especially when the label space is large. In this paper, we present a model, Label to Hierarchy (L2H), that can induce a hierarchy of user-generated labels, and the topics associated with those labels, from a set of multi-labeled documents. The model is robust enough to account for missing labels from untrained, disparate annotators and provides an interpretable summary of an otherwise unwieldy label set. We show empirically the effectiveness of L2H in predicting held-out words and labels for unseen documents.

1 Understanding Large Text Corpora through Label Annotations

Probabilistic topic models [4] discover the thematic structure of documents from news, blogs, and web pages. Typical unsupervised topic models such as latent Dirichlet allocation [7, LDA] uncover topics from unannotated documents. In many settings, however, documents are also associated with additional data, which provide a foundation for joint models of text with continuous response variables [6, 48, 27], categorical labels [37, 18, 46, 26], or link structure [9]. This paper focuses on additional information in the form of multi-labeled data, where each document is tagged with a set of labels. Such data are ubiquitous: web pages are tagged with multiple directories,1 books are labeled with different categories, and political speeches are annotated with multiple issues.2 Previous topic models on multi-labeled data focus on a small set of relatively independent labels [25, 36, 46].
Unfortunately, in many real-world examples, the number of labels, from hundreds to thousands, is incompatible with the independence assumptions of these models. In this paper, we capture the dependence among the labels using a learned tree-structured hierarchy. Our proposed model, L2H (Label to Hierarchy), learns from label co-occurrence and word usage to discover a hierarchy of topics associated with user-generated labels. We show empirically that L2H improves over relevant baselines in predicting words or missing labels in two prediction tasks. L2H is designed to explicitly capture the relationships among labels and to discover a highly interpretable hierarchy from multi-labeled data. This interpretable hierarchy helps improve prediction performance and also provides an effective way to search, browse, and understand multi-labeled data [17, 10, 8, 12].

∗Part of this work was done while the first author interned at Facebook.
1Open Directory Project (http://www.dmoz.org/)
2Policy Agenda Codebook (http://policyagendas.org/)

2 L2H: Capturing Label Dependencies using a Tree-structured Hierarchy

Discovering a topical hierarchy from text has been the focus of much topic modeling research. One popular approach is to learn an unsupervised hierarchy of topics. For example, hLDA [5] learns an unbounded tree-structured hierarchy of topics from unannotated documents. One drawback of hLDA is that documents are associated with only a single path in the topic tree. Recent work relaxing this restriction includes TSSB [1], nHDP [30], nCRF [2], and SHLDA [27]. Going beyond tree structures, PAM [20] captures the topic hierarchy using a pre-defined DAG, inspiring more flexible extensions [19, 24]. However, since only unannotated text is used to infer the hierarchical topics, an additional topic-labeling step is usually required to make the topics interpretable. This difficulty motivates work leveraging existing taxonomies, such as HSLDA [31] and hLLDA [32].
A second active area of research is constructing a taxonomy from multi-labeled data. For example, Heymann and Garcia-Molina [17] extract a tag hierarchy using the tag network centrality; similar work has been applied to protein hierarchies [42]. Hierarchies of concepts have come from seeded ontologies [39], crowdsourcing [29], and user-specified relations [33]. More sophisticated approaches build domain-specific keyword taxonomies by adapting Bayesian Rose Trees [21]. These approaches, however, concentrate on the tags, ignoring the content the tags describe. In this paper, we combine ideas from these two lines of research and introduce L2H, a hierarchical topic model that discovers a tree-structured hierarchy of concepts from a collection of multi-labeled documents. L2H takes as input a set of $D$ documents $\{w_d\}$, each tagged with a set of labels $l_d$. The label set $L$ contains $K$ unique, unstructured labels, and the word vocabulary size is $V$. To learn an interpretable taxonomy, L2H associates each label (a user-generated word or phrase) with a topic (a multinomial distribution over the vocabulary) to form a concept, and infers a tree-structured hierarchy to capture the relationships among concepts. Figure 1 shows the plate diagram for L2H, together with its generative process.
1. Create label graph $G$ and draw a uniform spanning tree $T$ from $G$ (§2.1)
2. For each node $k \in [1, K]$ in $T$:
   (a) If $k$ is the root, draw background topic $\phi_k \sim \text{Dir}(\beta u)$
   (b) Otherwise, draw topic $\phi_k \sim \text{Dir}(\beta \phi_{\sigma(k)})$, where $\sigma(k)$ is node $k$'s parent
3. For each document $d \in [1, D]$ having labels $l_d$:
   (a) Define $L^0_d$ and $L^1_d$ using $T$ and $l_d$ (cf. §2.2)
   (b) Draw $\theta^0_d \sim \text{Dir}(L^0_d \times \alpha)$ and $\theta^1_d \sim \text{Dir}(L^1_d \times \alpha)$
   (c) Draw a stochastic switching variable $\pi_d \sim \text{Beta}(\gamma_0, \gamma_1)$
   (d) For each token $n \in [1, N_d]$:
      i. Draw set indicator $x_{d,n} \sim \text{Bern}(\pi_d)$
      ii. Draw topic indicator $z_{d,n} \sim \text{Mult}(\theta^{x_{d,n}}_d)$
      iii. Draw word $w_{d,n} \sim \text{Mult}(\phi_{z_{d,n}})$

Figure 1: Generative process and the plate diagram notation of L2H.

2.1 Generating a labeled topic hierarchy

We assume an underlying directed graph $G = (E, V)$ in which each node is a concept consisting of (1) a label, the observable user-generated input, and (2) a topic, a latent multinomial distribution over words.3 The prior weight of a directed edge from node $i$ to node $k$ is the fraction of documents tagged with label $k$ that are also tagged with label $i$: $t_{i,k} = D_{i,k}/D_k$. We also assume an additional Background node. Edges to the Background node have prior weight zero, and edges from the Background node to node $i$ have prior weight $t_{\text{root},i} = D_i/\max_k D_k$. Here, $D_i$ is the number of documents tagged with label $i$, and $D_{i,k}$ is the number of documents tagged with both labels $i$ and $k$. The tree $T$ is a spanning tree generated from $G$. The probability of a tree given the graph $G$ is thus the product of its edge prior weights: $p(T \mid G) = \prod_{e \in T} t_e$. To capture the intuition that child nodes in the hierarchy specialize the concepts of their parents, we model the topic $\phi_k$ at each node $k$ using a Dirichlet distribution whose mean is centered at the topic of node $k$'s parent $\sigma(k)$, i.e., $\phi_k \sim \text{Dir}(\beta \phi_{\sigma(k)})$.

3In this paper, we use node when emphasizing the structure discovered by the model. Each node corresponds to a concept, which consists of a label and a topic.
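The edge-weight construction above ($t_{i,k} = D_{i,k}/D_k$, plus the Background-node weights) can be computed directly from label co-occurrence counts. A minimal sketch, with toy documents and hypothetical label names:

```python
from collections import Counter
from itertools import permutations

def edge_weights(doc_labels):
    # doc_labels: list of label sets, one per document (toy input)
    D = Counter()       # D[i]: number of docs tagged with label i
    Dpair = Counter()   # Dpair[(i, k)]: docs tagged with both i and k
    for labels in doc_labels:
        D.update(labels)
        Dpair.update(permutations(labels, 2))
    t = {(i, k): c / D[k] for (i, k), c in Dpair.items()}  # t_{i,k} = D_{i,k}/D_k
    dmax = max(D.values())
    for i in D:                      # edges from the Background (root) node
        t[('BG', i)] = D[i] / dmax   # t_{root,i} = D_i / max_k D_k
    return t

docs = [{'health', 'medicare'}, {'health'}, {'health', 'education'}, {'education'}]
t = edge_weights(docs)
print(t[('health', 'medicare')])   # 1.0: every 'medicare' doc is also a 'health' doc
```

High weight on the edge health → medicare encodes that "medicare" is plausibly a child concept of "health", which is exactly the signal the spanning-tree prior $p(T \mid G)$ exploits.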
The topic at the root node is drawn from a symmetric Dirichlet $\phi_{\text{root}} \sim \text{Dir}(\beta u)$, where $u$ is a uniform distribution over the vocabulary [1, 2]. This is similar to the idea of "backoff" in language models, where more specific contexts inherit the ideas expressed in more general contexts; i.e., if we talk about "pedagogy" in education, there is a high likelihood we will also talk about it in university education [22, 41].

2.2 Generating documents

As in LDA, each word in a document is generated by one of the latent topics. L2H, however, also uses the labels and topic hierarchy to restrict the topics a document uses. The document's label set $l_d$ identifies which nodes are more likely to be used. Restricting the tokens of a document in this way, so that they are generated only from a subset of the topics depending on the document's labels, creates specific, focused, labeled topics [36, Labeled LDA]. Unfortunately, $l_d$ is unlikely to be an exhaustive enumeration: particularly when the label set is large, users often forget or overlook relevant labels. We therefore depend on the learned topology of the hierarchy to fill in the gaps of what users forget by expanding $l_d$ into a broader set $L^1_d$, which is the union of the nodes on the paths from the root node to any of the document's label nodes. We call this the document's candidate set. The candidate set also induces a complementary set $L^0_d \equiv L \setminus L^1_d$ (illustrated in Figure 2). Previous approaches such as LPAM [3] and Tree Labeled LDA [40] also leverage the label hierarchy to expand the original label set. However, these previous models require the label hierarchy to be given rather than inferred, as in our L2H.

Figure 2: Illustration of the candidate label set: given a document $d$ having labels $l_d = \{2, 4\}$ (double-circled nodes), the candidate label set of $d$ consists of the nodes on all paths from the root node to node 2 and node 4: $L^1_d = \{0, 1, 2, 4\}$ and $L^0_d = \{3, 5, 6\}$.
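The expansion of $l_d$ into the candidate set $L^1_d$ is a walk from each label up to the root. A minimal sketch reproducing the Figure 2 example (the `parent` map encodes that tree):

```python
def candidate_set(parent, labels):
    # parent: child -> parent map (root maps to None); labels: document tags l_d
    L1 = set()
    for node in labels:
        while node is not None:   # walk from the label up to the root
            L1.add(node)
            node = parent[node]
    return L1

# tree of Figure 2, with node 0 as the root
parent = {0: None, 1: 0, 2: 1, 3: 1, 4: 0, 5: 4, 6: 4}
L1 = candidate_set(parent, {2, 4})
print(sorted(L1))                  # [0, 1, 2, 4]
print(sorted(set(parent) - L1))    # [3, 5, 6]  (the complement L0_d)
```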
This allows an imperfect label set to induce topics that the document should be associated with, even if they were not explicitly enumerated. L2H replaces Labeled LDA's absolute restriction to specific topics with a soft preference. To achieve this, each document $d$ has a switching variable $\pi_d$ drawn from $\text{Beta}(\gamma_0, \gamma_1)$, which effectively decides how likely tokens in $d$ are to be generated from $L^1_d$ versus $L^0_d$. Token $n$ in document $d$ is generated by first flipping the biased coin $\pi_d$ to choose the set indicator $x_{d,n}$. Given $x_{d,n}$, the label $z_{d,n}$ is drawn from the corresponding label distribution $\theta^{x_{d,n}}_d$, and the token is generated from the corresponding topic: $w_{d,n} \sim \text{Mult}(\phi_{z_{d,n}})$.

3 Posterior Inference

Given a set of documents with observed words $\{w_d\}$ and labels $\{l_d\}$, inference finds the posterior distribution over the latent variables. We use a Markov chain Monte Carlo (MCMC) algorithm to perform posterior inference, in which each iteration after initialization consists of the following steps: (1) sample a set indicator $x_{d,n}$ and topic assignment $z_{d,n}$ for each token, (2) sample a word distribution $\phi_k$ for each node $k$ in the tree, and (3) update the structure of the label tree.

Initialization: With the large number of labels, the space of hierarchical structures that MCMC needs to explore is huge. Initializing the tree-structured hierarchy is crucial to help the sampler focus on more important regions of the search space and to help it converge. We initialize the hierarchy with the maximum a priori probability tree by running the Chu-Liu/Edmonds algorithm to find the maximum spanning tree of the graph $G$, starting at the Background node.

Sampling assignments $x_{d,n}$ and $z_{d,n}$: For each token, we need to sample whether or not it was generated from the candidate set, $x_{d,n}$. We choose label set $i$ with probability $\frac{C^{-d,n}_{d,i} + \gamma_i}{C^{-d,n}_{d,\cdot} + \gamma_0 + \gamma_1}$, and we sample a node $k$ in the chosen set $i$ with probability $\frac{N^{-d,n}_{d,k} + \alpha}{C^{-d,n}_{d,i} + \alpha |L^i_d|} \cdot \phi_{k, w_{d,n}}$.
Here, $C_{d,i}$ is the number of times tokens in document $d$ are assigned to label set $i$; $N_{d,k}$ is the number of times tokens in document $d$ are assigned to node $k$. Marginal counts are denoted by $\cdot$, and $-d,n$ denotes counts excluding the assignment of token $w_{d,n}$. After we have the label set, we can sample the topic assignment. This is more efficient than sampling jointly, as most tokens are in the candidate set, and there is a limited number of topics in that set. The probability of assigning node $k$ to $z_{d,n}$ is
$$p(x_{d,n} = i, z_{d,n} = k \mid x^{-d,n}, z^{-d,n}, \phi, L^i_d) \propto \frac{C^{-d,n}_{d,i} + \gamma_i}{C^{-d,n}_{d,\cdot} + \gamma_0 + \gamma_1} \cdot \frac{N^{-d,n}_{d,k} + \alpha}{C^{-d,n}_{d,i} + \alpha |L^i_d|} \cdot \phi_{k, w_{d,n}} \qquad (1)$$

Sampling topics $\phi$: As discussed in Section 2.1, topics on each path in the hierarchy form a cascaded Dirichlet-multinomial chain where the multinomial $\phi_k$ at node $k$ is drawn from a Dirichlet distribution with mean vector equal to the topic $\phi_{\sigma(k)}$ at the parent node $\sigma(k)$. Given assignments of tokens to nodes, we need to determine the conditional probability of a word given the token. This can be done efficiently in two steps: bottom-up smoothing and top-down sampling [2].

• Bottom-up smoothing: This step estimates the counts $\tilde M_{k,v}$ of node $k$ propagated from its children. This can be approximated efficiently using the minimal/maximal path assumption [11, 44]. Under the minimal path assumption, each child node $k'$ of $k$ propagates a value of 1 to $\tilde M_{k,v}$ if $M_{k',v} > 0$. Under the maximal path assumption, each child node $k'$ of $k$ propagates the full count $M_{k',v}$ to $\tilde M_{k,v}$.

• Top-down sampling: After estimating $\tilde M_{k,v}$ for each node from leaf to root, we sample the word distributions top-down using each node's actual counts $m_k$, its children's propagated counts $\tilde m_k$, and its parent's word distribution $\phi_{\sigma(k)}$: $\phi_k \sim \text{Dir}(m_k + \tilde m_k + \beta \phi_{\sigma(k)})$.

Updating tree structure $T$: We update the tree structure by looping through each non-root node, proposing a new parent node, and either accepting or rejecting the proposed parent using the Metropolis-Hastings algorithm.
More specifically, given a non-root node $k$ with current parent $i$, we propose a new parent node $j$ by sampling from the incoming nodes of $k$ in graph $G$, with probability proportional to the corresponding edge weights. If the proposed parent node $j$ is a descendant of $k$, we reject the proposal to avoid creating a cycle. If it is not a descendant, we accept the proposed move with probability $\min\left(1, \frac{Q(i \prec k)}{Q(j \prec k)} \cdot \frac{P(j \prec k)}{P(i \prec k)}\right)$, where $Q$ and $P$ denote the proposal distribution and the model's joint distribution, respectively, and $i \prec k$ denotes the case where $i$ is the parent of $k$. Since we sample the proposed parent using the edge weights, the proposal probability ratio is
$$\frac{Q(i \prec k)}{Q(j \prec k)} = \frac{t_{i,k}}{t_{j,k}} \qquad (2)$$
The joint probability of L2H's observed and latent variables is
$$P = \prod_{e \in T} p(e \mid G) \prod_{d=1}^{D} p(x_d \mid \gamma)\, p(z_d \mid x_d, l_d, \alpha)\, p(w_d \mid z_d, \phi) \prod_{l=1}^{K} p(\phi_l \mid \phi_{\sigma(l)}, \beta)\, p(\phi_{\text{root}} \mid \beta) \qquad (3)$$
When node $k$ changes its parent from node $i$ to node $j$, the candidate set $L^1_d$ changes for any document $d$ that is tagged with any label in the subtree rooted at $k$. Let $\triangle_k$ denote the subtree rooted at $k$ and $D_{\triangle_k} = \{d \mid \exists l \in \triangle_k \wedge l \in l_d\}$ the set of documents whose candidate set might change when $k$'s parent changes. Canceling unchanged quantities, the ratio of the joint probabilities is
$$\frac{P(j \prec k)}{P(i \prec k)} = \frac{t_{j,k}}{t_{i,k}} \prod_{d \in D_{\triangle_k}} \frac{p(z_d \mid j \prec k)}{p(z_d \mid i \prec k)} \cdot \frac{p(x_d \mid j \prec k)}{p(x_d \mid i \prec k)} \cdot \frac{p(w_d \mid j \prec k)}{p(w_d \mid i \prec k)} \prod_{l=1}^{K} \frac{p(\phi_l \mid j \prec k)}{p(\phi_l \mid i \prec k)} \qquad (4)$$
We now expand each factor in Equation 4.
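The proposal step described above, sampling a new parent $j$ in proportion to the incoming edge weights and rejecting descendants of $k$ to avoid cycles, can be sketched as follows; the helper names are illustrative only, and the final accept/reject test using Eqs. (2) and (4) is left out:

```python
import random

def descendants(children, k):
    # all nodes in the subtree rooted at k (excluding k itself)
    out, stack = set(), [k]
    while stack:
        for c in children.get(stack.pop(), []):
            out.add(c)
            stack.append(c)
    return out

def propose_parent(k, t, children, rng=random):
    # sample a proposed parent j of k with probability proportional to t[(j, k)]
    cands = [j for (j, tgt) in t if tgt == k]
    j = rng.choices(cands, weights=[t[(j, k)] for j in cands])[0]
    if j == k or j in descendants(children, k):
        return None   # would create a cycle: reject outright
    # otherwise accept with prob min(1, Q(i<k)/Q(j<k) * P(j<k)/P(i<k)), Eqs. (2), (4)
    return j

children = {0: [1, 4], 1: [2, 3], 4: [5, 6]}
assert descendants(children, 4) == {5, 6}
assert propose_parent(1, {(2, 1): 1.0}, children) is None  # 2 is a descendant of 1
```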
The probability of node assignments $z_d$ for document $d$ is computed by integrating out the document-topic multinomials $\theta^0_d$ and $\theta^1_d$ (for the candidate set and its complement):
$$p(z_d \mid x_d, L^0_d, L^1_d; \alpha) = \prod_{x \in \{0,1\}} \frac{\Gamma(\alpha |L^x_d|)}{\Gamma(C_{d,x} + \alpha |L^x_d|)} \prod_{l \in L^x_d} \frac{\Gamma(N_{d,l} + \alpha)}{\Gamma(\alpha)} \qquad (5)$$
Similarly, we compute the probability of $x_d$ for each document $d$, integrating out $\pi_d$:
$$p(x_d \mid \gamma) = \frac{\Gamma(\gamma_0 + \gamma_1)}{\Gamma(C_{d,\cdot} + \gamma_0 + \gamma_1)} \prod_{x \in \{0,1\}} \frac{\Gamma(C_{d,x} + \gamma_x)}{\Gamma(\gamma_x)} \qquad (6)$$
Since we explicitly sample the topic $\phi_l$ at each node $l$, we would need to re-sample all topics for the case that $j$ is the parent of $k$ in order to compute the ratio $\prod_{l=1}^{K} \frac{p(\phi_l \mid j \prec k)}{p(\phi_l \mid i \prec k)}$. Given the sampled $\phi$, the word likelihood is $p(w_d \mid z_d, \phi) = \prod_{n=1}^{N_d} \phi_{z_{d,n}, w_{d,n}}$. However, re-sampling the topics of the whole hierarchy for every node proposal is inefficient. To avoid this, we keep all $\phi$'s fixed and approximate the ratio as
$$\prod_{d \in D_{\triangle_k}} \frac{p(w_d \mid j \prec k)}{p(w_d \mid i \prec k)} \prod_{l=1}^{K} \frac{p(\phi_l \mid j \prec k)}{p(\phi_l \mid i \prec k)} \approx \frac{\int_{\phi_k} p(m_k + \tilde m_k \mid \phi_k)\, p(\phi_k \mid \phi_j)\, d\phi_k}{\int_{\phi_k} p(m_k + \tilde m_k \mid \phi_k)\, p(\phi_k \mid \phi_i)\, d\phi_k} \qquad (7)$$
where $m_k$ is the word-count vector at node $k$ and $\tilde m_k$ is the word counts propagated from the children of $k$. Since $\phi$ is fixed and the node assignments $z$ are unchanged, the word likelihoods cancel out except for tokens assigned at $k$ or any of its children. The integral in Equation 7, for a proposed parent $j$, is
$$\int_{\phi_k} p(m_k + \tilde m_k \mid \phi_k)\, p(\phi_k \mid \phi_j)\, d\phi_k = \frac{\Gamma(\beta)}{\Gamma(M_{k,\cdot} + \tilde M_{k,\cdot} + \beta)} \prod_{v=1}^{V} \frac{\Gamma(M_{k,v} + \tilde M_{k,v} + \beta \phi_{j,v})}{\Gamma(\beta \phi_{j,v})} \qquad (8)$$
Using Equations 2 and 4, we can compute the Metropolis-Hastings acceptance probability.

4 Experiments: Analyzing Political Agendas in U.S. Congresses

In our experiments, we focus on studying political attention in the legislative process, which is of interest to both computer scientists [13, 14] and political scientists [15, 34]. GovTrack provides the text of bills from the US Congress, each of which is assigned multiple political issues by the Congressional Research Service. Examples of Congressional issues include Education, Higher Education, Health, Medicare, etc.
To evaluate the effectiveness of L2H, we evaluate on two computational tasks: document modeling—measuring perplexity on a held-out set of documents—and multi-label classification. We also discuss qualitative results based on the label hierarchy learned by our model.

Data: We use the text and labels from GovTrack for the 109th through 112th Congresses (2005–2012). For both quantitative tasks, we perform 5-fold cross-validation. For each fold, we perform standard pre-processing steps on the training set including tokenization, removing stopwords, stemming, adding bigrams, and filtering using TF-IDF to obtain a vocabulary of 10,000 words (final statistics in Figure 3).[4] After building the vocabulary from training documents, we discard all out-of-vocabulary words in the test documents. We ignore labels associated with fewer than 100 bills.

4.1 Document modeling

In the first quantitative experiment, we focus on the task of predicting the words in held-out test documents, given their labels. This is measured by perplexity, a widely-used evaluation metric [7, 45]. To compute perplexity, we follow the "estimating θ" method described in Wallach et al. [45, Sec. 5.1] and split each test document d into w^{TE1}_d and w^{TE2}_d. During training, we estimate all topics' distributions over the vocabulary, \hat{\phi}. At test time, we first run Gibbs sampling using the learned topics on w^{TE1}_d to estimate the topic proportions \hat{\theta}^{TE}_d for each test document d. Then, we compute the perplexity on the held-out words w^{TE2}_d as

\[
\exp\left( - \frac{\sum_d \log p(w^{TE2}_d \mid l_d, \hat{\theta}^{TE}_d, \hat{\phi})}{N^{TE2}} \right)
\]

where N^{TE2} is the total number of tokens in w^{TE2}_d.

[4] We find bigram candidates that occur at least ten times in the training set and use a χ² test to filter out those having a χ² value less than 5.0. We then treat selected bigrams as single word types in the vocabulary.

Setup: We compare our proposed model L2H with the following methods:

• LDA [7]: unsupervised topic model with a flat topic structure.
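The perplexity computation described above reduces to a one-liner once the per-document held-out log-likelihoods are available; a hedged sketch (names are illustrative):

```python
import math


def perplexity(log_probs, n_tokens):
    """Perplexity over held-out tokens: exp(-sum_d log p(w_d) / N),
    where log_probs holds one held-out log-likelihood per test document
    and n_tokens is the total number of held-out tokens."""
    return math.exp(-sum(log_probs) / n_tokens)
```

Lower is better; a uniform model over a V-word vocabulary has perplexity exactly V.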
In our experiments, we set the number of topics of LDA equal to the number of labels in each dataset.

• L-LDA [36]: associates each topic with a label, and a document is generated using only the topics associated with the document's labels.

• L2F (Label to Flat structure): a simplified version of L2H with a fixed, flat topic structure. The major difference between L2F and L-LDA is that L2F allows tokens to be drawn from topics that are not in the document's label set via the use of the switching variable (Section 2.2). Improvements of L2H over L2F show the importance of the hierarchical structure.

For all models, the number of topics is the number of labels in the dataset. We run for 1,000 iterations on the training data with a burn-in period of 500 iterations. After the burn-in period, we store ten sets of estimated parameters, one after every fifty iterations. During test time, we run ten chains using these ten learned models on the test data and compute the perplexity after 100 iterations. The perplexity of each fold is the average value over the ten chains [28].

[Figure 3: Dataset statistics. Number of bills: 13,067 / 14,034 / 13,673 / 12,274 and number of labels: 418 / 375 / 243 / 205 for the 109th–112th Congresses respectively.]

[Figure 4: Perplexity on held-out documents, averaged over 5 folds, for LDA, L-LDA, L2F, and L2H.]

Results: Figure 4 shows the perplexity of the four models averaged over five folds on the four datasets. LDA outperforms the models with labels since it can freely optimize the likelihood without additional constraints. L-LDA and L2F are comparable. However, L2H significantly outperforms both L-LDA and L2F. Thus, when incorporating labels into a model, learning an additional topic hierarchy improves the predictive power and generalizability of L-LDA.

4.2 Multi-label Classification

Multi-label classification is predicting a set of labels for a test document given its text [43, 23, 47].
The prediction is from a set of K pre-defined labels, and each document can be tagged with any of the 2^K possible subsets. In this experiment, we use M3L—an efficient max-margin multi-label classifier [16]—to study how features extracted from L2H improve classification. We use F1 as the evaluation metric. The F1 score is first computed for each document d as

\[
F1(d) = \frac{2 \cdot P(d) \cdot R(d)}{P(d) + R(d)},
\]

where P(d) and R(d) are the precision and recall for document d. After F1(d) is computed for all documents, the overall performance can be summarized by micro-averaging and macro-averaging to obtain Micro-F1 and Macro-F1 respectively. In macro-averaging, F1 is first computed for each document using its own confusion matrix and then averaged. In micro-averaging, on the other hand, a single confusion matrix is computed for all documents, and the F1 score is computed from this single confusion matrix [38].

Setup: We use the following sets of features:

• TF: Each document is represented by a vector of the term frequencies of all word types in the vocabulary.

• TF-IDF: Each document is represented by a vector \psi^{TFIDF}_d of the TF-IDF weights of all word types.

• L-LDA&TF-IDF: Ramage et al. [35] combine L-LDA features and TF-IDF features to improve performance on recommendation tasks. Likewise, we extract a K-dimensional vector \hat{\theta}^{L\text{-}LDA}_d and combine it with the TF-IDF vector \psi^{TFIDF}_d to form the feature vector of L-LDA&TF-IDF.[5]

[5] We run L-LDA on the training set for 1,000 iterations and store ten models after 500 burn-in iterations. For each model, we sample assignments for all tokens using 100 iterations and average over chains to estimate \hat{\theta}^{L\text{-}LDA}_d.

• L2H&TF-IDF: Similarly, we combine TF-IDF with the features \hat{\theta}^{L2H}_d = \{\hat{\theta}^0_d, \hat{\theta}^1_d\} extracted using L2H (same MCMC setup as L-LDA). One complication for L2H is the candidate label set L^1_d, which is not observed at test time. Thus, during test time, we estimate L^1_d using TF-IDF. Let D_l be the set of documents tagged with label l.
For each l, we compute a TF-IDF vector \phi^{TFIDF}_l = \mathrm{avg}_{d \in D_l} \psi^{TFIDF}_d. Then, for each document d, we generate the k nearest labels using cosine similarity and add them to the candidate label set L^1_d of d. Finally, we expand this initial set by adding all labels on the paths from the root of the learned hierarchy to any of the k nearest labels (Figure 2). We explored different values of k ∈ {3, 5, 7, 9}, with similar results; the results in this section are reported with k = 5.

[Figure 5: Multi-label classification results (Macro-F1 and Micro-F1), averaged over 5 folds, for TF, TF-IDF, L-LDA & TF-IDF, and L2H & TF-IDF.]

Results: Figure 5 shows the classification results. For both Macro-F1 and Micro-F1, TF-IDF, L-LDA&TF-IDF, and L2H&TF-IDF significantly outperform TF. Also, L-LDA&TF-IDF performs better than TF-IDF, which is consistent with Ramage et al. (2010) [35]. L2H&TF-IDF performs better than L-LDA&TF-IDF, which in turn performs better than TF-IDF. This shows that features extracted from L2H are more predictive than those extracted from L-LDA, and both improve classification. The improvements of L2H&TF-IDF and L-LDA&TF-IDF over TF-IDF are clearer for Macro-F1 than for Micro-F1. Thus, features from both topic models help improve prediction, regardless of the frequencies of their tagged labels.
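The test-time candidate-set construction used above—k nearest labels by TF-IDF cosine similarity, closed under paths to the root of the learned hierarchy—can be sketched as follows; all names are illustrative, not from the authors' code:

```python
import math


def cosine(u, v):
    """Cosine similarity between two dense vectors (plain lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def candidate_labels(doc_vec, label_vecs, parent, k=5):
    """Pick the k labels whose TF-IDF centroid is most similar to the
    document, then close the set under the path to the root.  `parent`
    maps each node to its parent (None at the root)."""
    ranked = sorted(label_vecs,
                    key=lambda l: cosine(doc_vec, label_vecs[l]),
                    reverse=True)
    cand = set(ranked[:k])
    for l in list(cand):
        p = parent.get(l)
        while p is not None:  # walk up to the root
            cand.add(p)
            p = parent.get(p)
    return cand
```

Closing the set under ancestor paths guarantees that whenever a label is a candidate, so are all of its ancestors, matching the tree-structured prior.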
4.3 Learned label hierarchy: A taxonomy of Congressional issues

[Figure 6: A subtree in the hierarchy learned by L2H. The subtree root International Affairs is a child node of the Background root node. Its children include nodes for Terrorism; International organizations & cooperation; Foreign aid and international relief; International law and treaties; Religion; Europe; the Middle East; Latin America; Asia; Military operations and strategy; Sanctions; Human rights; the Department of Defense; Military personnel and dependents; Armed forces and national security; and the Department of Homeland Security, each annotated with its top topic words.]

To qualitatively analyze the hierarchy learned by our model, Figure 6 shows a subtree whose root is about International Affairs, obtained by running L2H on bills in the 112th U.S. Congress. The learned topic at International Affairs shows the focus of the 112th Congress on the Arab Spring—a revolutionary wave of demonstrations and protests in Arab countries such as Libya and Bahrain. The concept is then split into two distinctive aspects of international affairs: Military and Diplomacy. We are working with domain experts to formally evaluate the learned concept hierarchy. A political scientist (personal communication) comments: The international affairs topic does an excellent job of capturing the key distinction between military/defense and diplomacy/aid.
Even more impressive is that it then also captures the major policy areas within each of these issues: the distinction between traditional military issues and terrorism-related issues, and the distinction between thematic policy (e.g., human rights) and geographic/regional policy.

5 Conclusion

We have presented L2H, a model that discovers not just the interaction between overt labels and the latent topics used in a corpus, but also how they fit together in a hierarchy. Hierarchies are a natural way to organize information, and combining labels with a hierarchy provides a mechanism for integrating user knowledge and data-driven summaries in a single, consistent structure. Our experiments show that L2H yields interpretable label/topic structures, that it can substantially improve model perplexity compared to baseline approaches, and that it improves performance on a multi-label prediction task.

Acknowledgments

We thank Kristina Miler, Ke Zhai, Leo Claudino, and He He for helpful discussions, and thank the anonymous reviewers for insightful comments. This research was supported in part by NSF under grants #1211153 (Resnik) and #1018625 (Boyd-Graber and Resnik). Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.

References

[1] Adams, R., Ghahramani, Z., and Jordan, M. (2010). Tree-structured stick breaking for hierarchical data. In NIPS. [2] Ahmed, A., Hong, L., and Smola, A. (2013). The nested Chinese restaurant franchise process: User tracking and document modeling. In ICML. [3] Bakalov, A., McCallum, A., Wallach, H., and Mimno, D. (2012). Topic models for taxonomies. In JCDL. [4] Blei, D. M. (2012). Probabilistic topic models. Communications of the ACM, 55(4):77–84. [5] Blei, D. M., Griffiths, T. L., Jordan, M. I., and Tenenbaum, J. B. (2003a). Hierarchical topic models and the nested Chinese restaurant process. In NIPS. [6] Blei, D. M. and McAuliffe, J. D. (2007).
Supervised topic models. In NIPS. [7] Blei, D. M., Ng, A., and Jordan, M. (2003b). Latent Dirichlet allocation. JMLR, 3. [8] Bragg, J., Mausam, and Weld, D. S. (2013). Crowdsourcing multi-label classification for taxonomy creation. In HCOMP. [9] Chang, J. and Blei, D. M. (2010). Hierarchical relational models for document networks. The Annals of Applied Statistics, 4(1):124–150. [10] Chilton, L. B., Little, G., Edge, D., Weld, D. S., and Landay, J. A. (2013). Cascade: Crowdsourcing taxonomy creation. In CHI. [11] Cowans, P. J. (2006). Probabilistic Document Modelling. PhD thesis, University of Cambridge. [12] Deng, J., Russakovsky, O., Krause, J., Bernstein, M. S., Berg, A., and Fei-Fei, L. (2014). Scalable multi-label annotation. In CHI. [13] Gerrish, S. and Blei, D. M. (2011). Predicting legislative roll calls from text. In ICML. [14] Gerrish, S. and Blei, D. M. (2012). How they vote: Issue-adjusted models of legislative behavior. In NIPS. [15] Grimmer, J. (2010). A Bayesian Hierarchical Topic Model for Political Texts: Measuring Expressed Agendas in Senate Press Releases. Political Analysis, 18(1):1–35. [16] Hariharan, B., Vishwanathan, S. V., and Varma, M. (2012). Efficient max-margin multi-label classification with applications to zero-shot learning. Mach. Learn., 88(1-2):127–155. 8 [17] Heymann, P. and Garcia-Molina, H. (2006). Collaborative creation of communal hierarchical taxonomies in social tagging systems. Technical Report 2006-10, Stanford InfoLab. [18] Lacoste-Julien, S., Sha, F., and Jordan, M. I. (2008). DiscLDA: Discriminative learning for dimensionality reduction and classification. In NIPS, pages 897–904. [19] Li, W., Blei, D. M., and McCallum, A. (2007). Nonparametric Bayes Pachinko allocation. In UAI. [20] Li, W. and McCallum, A. (2006). Pachinko allocation: DAG-structured mixture models of topic correlations. In ICML. [21] Liu, X., Song, Y., Liu, S., and Wang, H. (2012). Automatic taxonomy construction from keywords. In KDD. [22] Mackay, D. J. 
C. and Peto, L. C. B. (1995). A hierarchical Dirichlet language model. Natural Language Engineering, 1(3):289–308. [23] Madjarov, G., Kocev, D., Gjorgjevikj, D., and Deroski, S. (2012). An extensive experimental comparison of methods for multi-label learning. Pattern Recogn., 45(9):3084–3104. [24] Mimno, D., Li, W., and McCallum, A. (2007). Mixtures of hierarchical topics with Pachinko allocation. In ICML. [25] Mimno, D. M. and McCallum, A. (2008). Topic models conditioned on arbitrary features with Dirichletmultinomial regression. In UAI. [26] Nguyen, V.-A., Boyd-Graber, J., and Resnik, P. (2012). SITS: A hierarchical nonparametric model using speaker identity for topic segmentation in multiparty conversations. In ACL. [27] Nguyen, V.-A., Boyd-Graber, J., and Resnik, P. (2013). Lexical and hierarchical topic regression. In NIPS. [28] Nguyen, V.-A., Boyd-Graber, J., and Resnik, P. (2014). Sometimes average is best: The importance of averaging for prediction using MCMC inference in topic modeling. In EMNLP. [29] Nikolova, S. S., Boyd-Graber, J., and Fellbaum, C. (2011). Collecting Semantic Similarity Ratings to Connect Concepts in Assistive Communication Tools. Studies in Computational Intelligence. Springer. [30] Paisley, J. W., Wang, C., Blei, D. M., and Jordan, M. I. (2012). Nested hierarchical Dirichlet processes. CoRR, abs/1210.6738. [31] Perotte, A. J., Wood, F., Elhadad, N., and Bartlett, N. (2011). Hierarchically supervised latent Dirichlet allocation. In NIPS. [32] Petinot, Y., McKeown, K., and Thadani, K. (2011). A hierarchical model of web summaries. In HLT. [33] Plangprasopchok, A. and Lerman, K. (2009). Constructing folksonomies from user-specified relations on Flickr. In WWW. [34] Quinn, K. M., Monroe, B. L., Colaresi, M., Crespin, M. H., and Radev, D. R. (2010). How to analyze political attention with minimal assumptions and costs. American Journal of Political Science, 54(1):209–228. [35] Ramage, D., Dumais, S. T., and Liebling, D. J. (2010). 
Characterizing microblogs with topic models. In ICWSM. [36] Ramage, D., Hall, D., Nallapati, R., and Manning, C. (2009). Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. In EMNLP. [37] Rosen-Zvi, M., Griffiths, T. L., Steyvers, M., and Smyth, P. (2004). The author-topic model for authors and documents. In UAI. [38] Rubin, T. N., Chambers, A., Smyth, P., and Steyvers, M. (2012). Statistical topic models for multi-label document classification. Mach. Learn., 88(1-2):157–208. [39] Schmitz, P. (2006). Inducing ontology from Flickr tags. In WWW 2006. [40] Slutsky, A., Hu, X., and An, Y. (2013). Tree labeled LDA: A hierarchical model for web summaries. In IEEE International Conference on Big Data, pages 134–140. [41] Teh, Y. W. (2006). A hierarchical Bayesian language model based on Pitman-Yor processes. In ACL. [42] Tibely, G., Pollner, P., Vicsek, T., and Palla, G. (2013). Extracting tag hierarchies. PLoS ONE, 8(12):e84133. [43] Tsoumakas, G., Katakis, I., and Vlahavas, I. P. (2010). Mining multi-label data. In Data Mining and Knowledge Discovery Handbook. [44] Wallach, H. M. (2008). Structured Topic Models for Language. PhD thesis, University of Cambridge. [45] Wallach, H. M., Murray, I., Salakhutdinov, R., and Mimno, D. (2009). Evaluation methods for topic models. In ICML. [46] Wang, C., Blei, D., and Fei-Fei, L. (2009). Simultaneous image classification and annotation. In CVPR. [47] Zhang, M.-L. and Zhou, Z.-H. (2014). A review on multi-label learning algorithms. IEEE TKDE, 26(8). [48] Zhu, J., Ahmed, A., and Xing, E. P. (2009). MedLDA: maximum margin supervised topic models for regression and classification. In ICML.
Recurrent Models of Visual Attention Volodymyr Mnih Nicolas Heess Alex Graves Koray Kavukcuoglu Google DeepMind {vmnih,heess,gravesa,korayk} @ google.com Abstract Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so. 1 Introduction Neural network-based architectures have recently had great success in significantly advancing the state of the art on challenging image classification and object detection datasets [8, 12, 19]. Their excellent recognition accuracy, however, comes at a high computational cost both at training and testing time. The large convolutional neural networks typically used currently take days to train on multiple GPUs even though the input images are downsampled to reduce computation [12]. 
In the case of object detection processing a single image at test time currently takes seconds when running on a single GPU [8, 19] as these approaches effectively follow the classical sliding window paradigm from the computer vision literature where a classifier, trained to detect an object in a tightly cropped bounding box, is applied independently to thousands of candidate windows from the test image at different positions and scales. Although some computations can be shared, the main computational expense for these models comes from convolving filter maps with the entire input image, therefore their computational complexity is at least linear in the number of pixels. One important property of human perception is that one does not tend to process a whole scene in its entirety at once. Instead humans focus attention selectively on parts of the visual space to acquire information when and where it is needed, and combine information from different fixations over time to build up an internal representation of the scene [18], guiding future eye movements and decision making. Focusing the computational resources on parts of a scene saves “bandwidth” as fewer “pixels” need to be processed. But it also substantially reduces the task complexity as the object of interest can be placed in the center of the fixation and irrelevant features of the visual environment (“clutter”) outside the fixated region are naturally ignored. In line with its fundamental role, the guidance of human eye movements has been extensively studied in neuroscience and cognitive science literature. While low-level scene properties and bottom up processes (e.g. in the form of saliency; [11]) play an important role, the locations on which humans fixate have also been shown to be strongly task specific (see [9] for a review and also e.g. [15, 22]). In this paper we take inspiration from these results and develop a novel framework for attention-based task-driven visual processing with neural networks. 
Our model considers attention-based processing 1 of a visual scene as a control problem and is general enough to be applied to static images, videos, or as a perceptual module of an agent that interacts with a dynamic visual environment (e.g. robots, computer game playing agents). The model is a recurrent neural network (RNN) which processes inputs sequentially, attending to different locations within the images (or video frames) one at a time, and incrementally combines information from these fixations to build up a dynamic internal representation of the scene or environment. Instead of processing an entire image or even bounding box at once, at each step, the model selects the next location to attend to based on past information and the demands of the task. Both the number of parameters in our model and the amount of computation it performs can be controlled independently of the size of the input image, which is in contrast to convolutional networks whose computational demands scale linearly with the number of image pixels. We describe an end-to-end optimization procedure that allows the model to be trained directly with respect to a given task and to maximize a performance measure which may depend on the entire sequence of decisions made by the model. This procedure uses backpropagation to train the neural-network components and policy gradient to address the non-differentiabilities due to the control problem. We show that our model can learn effective task-specific strategies for where to look on several image classification tasks as well as a dynamic visual control problem. Our results also suggest that an attention-based model may be better than a convolutional neural network at both dealing with clutter and scaling up to large input images. 2 Previous Work Computational limitations have received much attention in the computer vision literature. 
For instance, for object detection, much work has been dedicated to reducing the cost of the widespread sliding window paradigm, focusing primarily on reducing the number of windows for which the full classifier is evaluated, e.g. via classifier cascades (e.g. [7, 24]), removing image regions from consideration via a branch and bound approach on the classifier output (e.g. [13]), or by proposing candidate windows that are likely to contain objects (e.g. [1, 23]). Even though substantial speedups may be obtained with such approaches, and some of them can be combined with or used as an add-on to CNN classifiers [8], they remain firmly rooted in the window classifier design for object detection and only exploit past information to inform future processing of the image in a very limited way. A second class of approaches that has a long history in computer vision and is strongly motivated by human perception are saliency detectors (e.g. [11]). These approaches prioritize the processing of potentially interesting ("salient") image regions, which are typically identified based on some measure of local low-level feature contrast. Saliency detectors indeed capture some of the properties of human eye movements, but they typically do not integrate information across fixations, their saliency computations are mostly hardwired, and they are based on low-level image properties only, usually ignoring other factors such as the semantic content of a scene and task demands (but see [22]). Some works in the computer vision literature and elsewhere, e.g. [2, 4, 6, 14, 16, 17, 20], have embraced vision as a sequential decision task, as we do here. There, as in our work, information about the image is gathered sequentially and the decision where to attend next is based on previous fixations of the image. [4] applies the learned Bayesian observer model from [5] to the task of object detection.
The learning framework of [5] is related to ours as they also employ a policy gradient formulation (cf. Section 3), but their overall setup is considerably more restrictive than ours and only some parts of the system are learned. Our work is perhaps most similar to the other attempts to implement attentional processing in a deep learning framework [6, 14, 17]. Our formulation, which employs an RNN to integrate visual information over time and to decide how to act, is, however, more general, and our learning procedure allows for end-to-end optimization of the sequential decision process instead of relying on greedy action selection. We further demonstrate how the same general architecture can be used for efficient object recognition in still images as well as to interact with a dynamic visual environment in a task-driven way.

3 The Recurrent Attention Model (RAM)

In this paper we consider the attention problem as the sequential decision process of a goal-directed agent interacting with a visual environment. At each point in time, the agent observes the environment only via a bandwidth-limited sensor, i.e. it never senses the environment in full. It may extract information only in a local region or in a narrow frequency band. The agent can, however, actively control how to deploy its sensor resources (e.g. choose the sensor location). The agent can also affect the true state of the environment by executing actions. Since the environment is only partially observed, the agent needs to integrate information over time in order to determine how to act and how to deploy its sensor most effectively. At each step, the agent receives a scalar reward (which depends on the actions the agent has executed and can be delayed), and the goal of the agent is to maximize the total sum of such rewards. This formulation encompasses tasks as diverse as object detection in static images and control problems like playing a computer game from the image stream visible on the screen. For a game, the environment state would be the true state of the game engine and the agent's sensor would operate on the video frame shown on the screen. (Note that for most games, a single frame would not fully specify the game state.)

[Figure 1: A) Glimpse Sensor: Given the coordinates of the glimpse and an input image, the sensor extracts a retina-like representation ρ(x_t, l_{t−1}) centered at l_{t−1} that contains multiple resolution patches. B) Glimpse Network: Given the location l_{t−1} and input image x_t, the glimpse network uses the glimpse sensor to extract the retina representation ρ(x_t, l_{t−1}). The retina representation and glimpse location are then mapped into a hidden space using independent linear layers parameterized by θ_g^0 and θ_g^1 respectively, with rectified units, followed by another linear layer θ_g^2 that combines the information from both components. The glimpse network f_g(·; {θ_g^0, θ_g^1, θ_g^2}) defines a trainable bandwidth-limited sensor for the attention network, producing the glimpse representation g_t. C) Model Architecture: Overall, the model is an RNN. The core network f_h(·; θ_h) takes the glimpse representation g_t as input and, combining it with the internal representation h_{t−1} at the previous time step, produces the new internal state h_t. The location network f_l(·; θ_l) and the action network f_a(·; θ_a) use the internal state h_t to produce the next location to attend to, l_t, and the action/classification, a_t, respectively. This basic RNN iteration is repeated for a variable number of steps.]
The environment actions here would correspond to joystick controls, and the reward would reflect points scored. For object detection in static images the state of the environment would be fixed and correspond to the true contents of the image. The environmental action would correspond to the classification decision (which may be executed only after a fixed number of fixations), and the reward would reflect whether the decision is correct.

3.1 Model

The agent is built around a recurrent neural network as shown in Fig. 1. At each time step, it processes the sensor data, integrates information over time, and chooses how to act and how to deploy its sensor at the next time step:

Sensor: At each step t the agent receives a (partial) observation of the environment in the form of an image x_t. The agent does not have full access to this image but rather can extract information from x_t via its bandwidth-limited sensor ρ, e.g. by focusing the sensor on some region or frequency band of interest. In this paper we assume that the bandwidth-limited sensor extracts a retina-like representation ρ(x_t, l_{t−1}) around location l_{t−1} from image x_t. It encodes the region around l at a high resolution but uses a progressively lower resolution for pixels further from l, resulting in a vector of much lower dimensionality than the original image x. We will refer to this low-resolution representation as a glimpse [14]. The glimpse sensor is used inside what we call the glimpse network f_g to produce the glimpse feature vector g_t = f_g(x_t, l_{t−1}; θ_g) where θ_g = {θ_g^0, θ_g^1, θ_g^2} (Fig. 1B).

Internal state: The agent maintains an internal state which summarizes information extracted from the history of past observations; it encodes the agent's knowledge of the environment and is instrumental in deciding how to act and where to deploy the sensor. This internal state is formed by the hidden units h_t of the recurrent neural network and updated over time by the core network: h_t = f_h(h_{t−1}, g_t; θ_h).
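As a concrete illustration, the retina-like extraction can be sketched as below; the patch sizes, number of scales, and block-average pooling are assumptions chosen for the sketch, not the paper's exact configuration:

```python
import numpy as np


def glimpse(image, loc, size=8, scales=3):
    """Toy retina-like sensor: extract `scales` square patches centered
    at loc = (row, col), doubling in width at each scale, and downsample
    each to size x size by block-averaging -- a sketch of rho(x_t, l_{t-1})."""
    patches = []
    for s in range(scales):
        w = size * (2 ** s)          # patch width at this scale
        r0, c0 = loc[0] - w // 2, loc[1] - w // 2
        # pad so crops near the border stay valid
        padded = np.pad(image, w, mode="constant")
        patch = padded[r0 + w:r0 + 2 * w, c0 + w:c0 + 2 * w]
        k = 2 ** s                   # pooling factor back to size x size
        pooled = patch.reshape(size, k, size, k).mean(axis=(1, 3))
        patches.append(pooled)
    # flatten and concatenate into one low-dimensional glimpse vector
    return np.concatenate([p.ravel() for p in patches])
```

The output length is fixed at `scales * size**2` regardless of the input image size, which is exactly the property that decouples the model's computation from the number of pixels.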
The external input to the network is the glimpse feature vector g_t. Actions: At each step, the agent performs two actions: it decides how to deploy its sensor via the sensor control l_t, and an environment action a_t which might affect the state of the environment. The nature of the environment action depends on the task. In this work, the location actions are chosen stochastically from a distribution parameterized by the location network f_l(h_t; θ_l) at time t: l_t ∼ p(·|f_l(h_t; θ_l)). The environment action a_t is similarly drawn from a distribution conditioned on a second network output: a_t ∼ p(·|f_a(h_t; θ_a)). For classification it is formulated using a softmax output, and for dynamic environments its exact formulation depends on the action set defined for that particular environment (e.g. joystick movements, motor control, ...). Finally, our model can also be augmented with an additional action that decides when it will stop taking glimpses. This could, for example, be used to learn a cost-sensitive classifier by giving the agent a negative reward for each glimpse it takes, forcing it to trade off making correct classifications with the cost of taking more glimpses. Reward: After executing an action the agent receives a new visual observation of the environment x_{t+1} and a reward signal r_{t+1}. The goal of the agent is to maximize the sum of the reward signal¹, which is usually very sparse and delayed: R = \sum_{t=1}^{T} r_t. In the case of object recognition, for example, r_T = 1 if the object is classified correctly after T steps and 0 otherwise. The above setup is a special instance of what is known in the RL community as a Partially Observable Markov Decision Process (POMDP). The true state of the environment (which can be static or dynamic) is unobserved. In this view, the agent needs to learn a (stochastic) policy π((l_t, a_t)|s_{1:t}; θ) with parameters θ that, at each step t, maps the history of past interactions with the environment s_{1:t} = x_1, l_1, a_1, . . .
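The per-step computation just described — a core state update followed by stochastic location and action choices — can be sketched in a few lines. This is a toy NumPy version with random weights; all names, layer sizes, and the fixed location variance are our illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
G, H, A = 16, 32, 3          # glimpse dim, hidden dim, number of env actions
Wh = rng.normal(scale=0.1, size=(H, H))   # core network f_h
Wg = rng.normal(scale=0.1, size=(H, G))
Wl = rng.normal(scale=0.1, size=(2, H))   # location network f_l
Wa = rng.normal(scale=0.1, size=(A, H))   # action network f_a

def rect(x):
    return np.maximum(x, 0.0)

def step(h_prev, g_t, loc_sigma=0.1):
    """One RNN iteration: core update, then stochastic location and action."""
    h_t = rect(Wh @ h_prev + Wg @ g_t)        # h_t = f_h(h_{t-1}, g_t)
    l_t = rng.normal(Wl @ h_t, loc_sigma)     # l_t ~ N(f_l(h_t), sigma^2 I)
    logits = Wa @ h_t
    p = np.exp(logits - logits.max())
    p /= p.sum()                              # softmax over env actions
    a_t = rng.choice(A, p=p)
    return h_t, l_t, a_t

h, l, a = step(np.zeros(H), rng.normal(size=G))
```

Iterating `step` for a fixed number of glimpses, feeding each sampled l_t back into the glimpse sensor, reproduces the unrolled computation of Fig. 1C.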
x_{t−1}, l_{t−1}, a_{t−1}, x_t to a distribution over actions for the current time step, subject to the constraint of the sensor. In our case, the policy π is defined by the RNN outlined above, and the history s_t is summarized in the state of the hidden units h_t. We will describe the specific choices for the above components in Section 4. 3.2 Training The parameters of our agent are given by the parameters of the glimpse network, the core network (Fig. 1C), and the action network, θ = {θ_g, θ_h, θ_a}, and we learn these to maximize the total reward the agent can expect when interacting with the environment. More formally, the policy of the agent, possibly in combination with the dynamics of the environment (e.g. for game-playing), induces a distribution over possible interaction sequences s_{1:N} and we aim to maximize the reward under this distribution: J(θ) = E_{p(s_{1:T}; θ)}[\sum_{t=1}^{T} r_t] = E_{p(s_{1:T}; θ)}[R], where p(s_{1:T}; θ) depends on the policy. Maximizing J exactly is non-trivial since it involves an expectation over the high-dimensional interaction sequences, which may in turn involve unknown environment dynamics. Viewing the problem as a POMDP, however, allows us to bring techniques from the RL literature to bear: As shown by Williams [26], a sample approximation to the gradient is given by

∇_θ J = \sum_{t=1}^{T} E_{p(s_{1:T}; θ)}[∇_θ \log π(u_t|s_{1:t}; θ) R] ≈ (1/M) \sum_{i=1}^{M} \sum_{t=1}^{T} ∇_θ \log π(u_t^i|s_{1:t}^i; θ) R^i,   (1)

where the s^i are interaction sequences obtained by running the current agent π_θ for i = 1, . . . , M episodes.

¹Depending on the scenario it may be more appropriate to consider a sum of discounted rewards, where rewards obtained in the distant future contribute less: R = \sum_{t=1}^{T} γ^{t−1} r_t. In this case we can have T → ∞.
The learning rule (1) is also known as the REINFORCE rule, and it involves running the agent with its current policy to obtain samples of interaction sequences s_{1:T} and then adjusting the parameters θ of our agent such that the log-probability of chosen actions that have led to high cumulative reward is increased, while that of actions having produced low reward is decreased. Eq. (1) requires us to compute ∇_θ \log π(u_t^i|s_{1:t}^i; θ). But this is just the gradient of the RNN that defines our agent evaluated at time step t and can be computed by standard backpropagation [25]. Variance Reduction: Equation (1) provides us with an unbiased estimate of the gradient, but it may have high variance. It is therefore common to consider a gradient estimate of the form

(1/M) \sum_{i=1}^{M} \sum_{t=1}^{T} ∇_θ \log π(u_t^i|s_{1:t}^i; θ) (R_t^i − b_t),   (2)

where R_t^i = \sum_{t′=1}^{T} r_{t′}^i is the cumulative reward obtained following the execution of action u_t^i, and b_t is a baseline that may depend on s_{1:t}^i (e.g. via h_t^i) but not on the action u_t^i itself. This estimate is equal to (1) in expectation but may have lower variance. It is natural to select b_t = E_π[R_t] [21], and this form of baseline is known as the value function in the reinforcement learning literature. The resulting algorithm increases the log-probability of an action that was followed by a larger than expected cumulative reward, and decreases the probability if the obtained cumulative reward was smaller. We use this type of baseline and learn it by reducing the squared error between the R_t^i's and b_t. Using a Hybrid Supervised Loss: The algorithm described above allows us to train the agent when the "best" actions are unknown and the learning signal is only provided via the reward. For instance, we may not know a priori which sequence of fixations provides the most information about an unknown image, but the total reward at the end of an episode will give us an indication of whether the tried sequence was good or bad.
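A minimal version of the variance-reduced estimator in Eq. (2) — per-timestep score-function gradients weighted by (R_t − b_t) — might look like this for a Gaussian location policy, taking the gradient with respect to the policy mean only. This is our simplified sketch (episode data and the choice of cumulative-from-t returns are assumptions), not the paper's implementation:

```python
import numpy as np

def reinforce_grad(actions, mus, rewards, baselines, sigma=0.1):
    """Sample gradient of Eq. (2) for one episode with a Gaussian policy:
    sum_t  d log N(u_t; mu_t, sigma^2) / d mu_t  *  (R_t - b_t).
    Here R_t is taken as the cumulative reward following step t."""
    grad = np.zeros_like(mus[0])
    for t in range(len(actions)):
        R_t = sum(rewards[t:])                       # return following step t
        dlogpi = (actions[t] - mus[t]) / sigma ** 2  # score of Gaussian policy
        grad += dlogpi * (R_t - baselines[t])
    return grad
```

Averaging such per-episode gradients over M episodes gives the Monte Carlo estimate of Eq. (2); the baseline b_t would itself be fit by regressing on the observed returns, as described above.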
However, in some situations we do know the correct action to take: For instance, in an object detection task the agent has to output the label of the object as the final action. For the training images this label will be known, and we can directly optimize the policy to output the correct label associated with a training image at the end of an observation sequence. This can be achieved, as is common in supervised learning, by maximizing the conditional probability of the true label given the observations from the image, i.e. by maximizing \log π(a_T^*|s_{1:T}; θ), where a_T^* corresponds to the ground-truth label(-action) associated with the image from which observations s_{1:T} were obtained. We follow this approach for classification problems, where we optimize the cross-entropy loss to train the action network f_a and backpropagate the gradients through the core and glimpse networks. The location network f_l is always trained with REINFORCE. 4 Experiments We evaluated our approach on several image classification tasks as well as a simple game. We first describe the design choices that were common to all our experiments. Retina and location encodings: The retina encoding ρ(x, l) extracts k square patches centered at location l, with the first patch being g_w × g_w pixels in size and each successive patch having twice the width of the previous. The k patches are then all resized to g_w × g_w and concatenated. Glimpse locations l were encoded as real-valued (x, y) coordinates², with (0, 0) being the center of the image x and (−1, −1) being the top left corner of x. Glimpse network: The glimpse network f_g(x, l) had two fully connected layers. Let Linear(x) denote a linear transformation of the vector x, i.e. Linear(x) = Wx + b for some weight matrix W and bias vector b, and let Rect(x) = max(x, 0) be the rectifier nonlinearity. The output g of the glimpse network was defined as g = Rect(Linear(h_g) + Linear(h_l)) where h_g = Rect(Linear(ρ(x, l))) and h_l = Rect(Linear(l)).
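As a concrete illustration of this retina encoding, here is a minimal NumPy sketch. The function name, zero-padding at the borders, and nearest-neighbour subsampling in place of proper resizing are all our assumptions:

```python
import numpy as np

def extract_retina(image, loc, g_w=8, k=3):
    """Multi-scale retina sketch: k square patches centered at loc, the i-th
    of side g_w * 2**i, each subsampled back to g_w x g_w and concatenated."""
    H, W = image.shape
    cy = int((loc[0] + 1) / 2 * H)   # map [-1, 1] coords to pixel indices
    cx = int((loc[1] + 1) / 2 * W)
    patches = []
    for i in range(k):
        size = g_w * 2 ** i
        half = size // 2
        padded = np.pad(image, half)               # zero-pad for border patches
        patch = padded[cy:cy + size, cx:cx + size]
        patches.append(patch[::2 ** i, ::2 ** i])  # crude nearest-neighbour resize
    return np.concatenate([p.ravel() for p in patches])
```

The result is a vector of length k · g_w · g_w, independent of the input image size — the property that later makes the model's computation constant in the number of pixels.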
The dimensionality of h_g and h_l was 128 while the dimensionality of g was 256 for all attention models trained in this paper. Location network: The policy for the locations l was defined by a two-component Gaussian with a fixed variance. The location network outputs the mean of the location policy at time t and is defined as f_l(h) = Linear(h) where h is the state of the core network/RNN.

²We also experimented with using a discrete representation for the locations l but found that it was difficult to learn policies over more than 25 possible discrete locations.

(a) 28x28 MNIST
Model                                  Error
FC, 2 layers (256 hiddens each)        1.69%
Convolutional, 2 layers                1.21%
RAM, 2 glimpses, 8 × 8, 1 scale        3.79%
RAM, 3 glimpses, 8 × 8, 1 scale        1.51%
RAM, 4 glimpses, 8 × 8, 1 scale        1.54%
RAM, 5 glimpses, 8 × 8, 1 scale        1.34%
RAM, 6 glimpses, 8 × 8, 1 scale        1.12%
RAM, 7 glimpses, 8 × 8, 1 scale        1.07%

(b) 60x60 Translated MNIST
Model                                  Error
FC, 2 layers (64 hiddens each)         6.42%
FC, 2 layers (256 hiddens each)        2.63%
Convolutional, 2 layers                1.62%
RAM, 4 glimpses, 12 × 12, 3 scales     1.54%
RAM, 6 glimpses, 12 × 12, 3 scales     1.22%
RAM, 8 glimpses, 12 × 12, 3 scales     1.2%

Table 1: Classification results on the MNIST and Translated MNIST datasets. FC denotes a fully-connected network with two layers of rectifier units. The convolutional network had one layer of 8 10 × 10 filters with stride 5, followed by a fully connected layer with 256 units, with rectifiers after each layer. Instances of the attention model are labeled with the number of glimpses, the number of scales in the retina, and the size of the retina.

Figure 2: Examples of test cases for the (a) Translated and (b) Cluttered Translated MNIST tasks.

Core network: For the classification experiments that follow, the core f_h was a network of rectifier units defined as h_t = f_h(h_{t−1}) = Rect(Linear(h_{t−1}) + Linear(g_t)).
The experiment done on a dynamic environment used a core of LSTM units [10]. 4.1 Image Classification The attention network used in the following classification experiments made a classification decision only at the last timestep t = N. The action network f_a was simply a linear softmax classifier defined as f_a(h) = exp(Linear(h))/Z, where Z is a normalizing constant. The RNN state vector h had dimensionality 256. All methods were trained using stochastic gradient descent with minibatches of size 20 and momentum of 0.9. We annealed the learning rate linearly from its initial value to 0 over the course of training. Hyperparameters such as the initial learning rate and the variance of the location policy were selected using random search [3]. The reward at the last time step was 1 if the agent classified correctly and 0 otherwise. The rewards for all other timesteps were 0. Centered Digits: We first tested the ability of our training method to learn successful glimpse policies by using it to train RAM models with up to 7 glimpses on the MNIST digits dataset. The "retina" for this experiment was simply an 8×8 patch, which is only big enough to capture a part of a digit, hence the experiment also tested the ability of RAM to combine information from multiple glimpses. We also trained standard feedforward and convolutional neural networks with two hidden layers as baselines. The error rates achieved by the different models on the test set are shown in Table 1a. We see that the performance of RAM generally improves with more glimpses, and that it eventually outperforms the baseline models trained on the full 28 × 28 centered digits. This demonstrates that the model can successfully learn to combine information from multiple glimpses. Non-Centered Digits: The second problem we considered was classifying non-centered digits. We created a new task called Translated MNIST, for which data was generated by placing an MNIST digit in a random location of a larger blank patch.
Training cases were generated on the fly, so the effective training set size was 50000 (the size of the MNIST training set) multiplied by the number of possible locations. Figure 2a contains a random sample of test cases for the 60 by 60 Translated MNIST task. Table 1b shows the results for several different models trained on the Translated MNIST task with 60 by 60 patches. In addition to RAM and two fully-connected networks, we also trained a network with one convolutional layer of 16 10 × 10 filters with stride 5 followed by a rectifier nonlinearity and then a fully-connected layer of 256 rectifier units. The convolutional network, the RAM networks, and the smaller fully connected model all had roughly the same number of parameters. Since the convolutional network has some degree of translation invariance built in, it attains a significantly lower error rate of 1.62% than the fully connected networks. However, RAM with 4 glimpses gets slightly better performance than the convolutional network and outperforms it further with 6 and 8 glimpses, reaching 1.2% error. This is possible because the attention model can focus its retina on the digit and hence learn a translation-invariant policy. This experiment also shows that the attention model is able to successfully search for an object in a big image when the object is not centered.

(a) 60x60 Cluttered Translated MNIST
Model                                  Error
FC, 2 layers (64 hiddens each)         28.58%
FC, 2 layers (256 hiddens each)        11.96%
Convolutional, 2 layers                8.09%
RAM, 4 glimpses, 12 × 12, 3 scales     4.96%
RAM, 6 glimpses, 12 × 12, 3 scales     4.08%
RAM, 8 glimpses, 12 × 12, 3 scales     4.04%
RAM, 8 random glimpses                 14.4%

(b) 100x100 Cluttered Translated MNIST
Model                                  Error
Convolutional, 2 layers                14.35%
RAM, 4 glimpses, 12 × 12, 4 scales     9.41%
RAM, 6 glimpses, 12 × 12, 4 scales     8.31%
RAM, 8 glimpses, 12 × 12, 4 scales     8.11%
RAM, 8 random glimpses                 28.4%

Table 2: Classification on the Cluttered Translated MNIST dataset. FC denotes a fully-connected network with two layers of rectifier units. The convolutional network had one layer of 8 10 × 10 filters with stride 5, followed by a fully connected layer with 256 units in the 60 × 60 case and 86 units in the 100 × 100 case, with rectifiers after each layer. Instances of the attention model are labeled with the number of glimpses, the size of the retina, and the number of scales in the retina. All models except for the big fully connected network had roughly the same number of parameters.

Figure 3: Examples of the learned policy on the 60 × 60 Cluttered Translated MNIST task. Column 1: The input image with the glimpse path overlaid in green. Columns 2-7: The six glimpses the network chooses. The center of each image shows the full resolution glimpse; the outer low resolution areas are obtained by upscaling the low resolution glimpses back to full image size. The glimpse paths clearly show that the learned policy avoids computation in empty or noisy parts of the input space and directly explores the area around the object of interest.

Cluttered Non-Centered Digits: One of the most challenging aspects of classifying real-world images is the presence of a wide range of clutter. Systems that operate on the entire image at full resolution are particularly susceptible to clutter and must learn to be invariant to it. One possible advantage of an attention mechanism is that it may make it easier to learn in the presence of clutter by focusing on the relevant part of the image and ignoring the irrelevant parts. We test this hypothesis with several experiments on a new task we call Cluttered Translated MNIST. Data for this task was generated by first placing an MNIST digit in a random location of a larger blank image and then adding random 8 by 8 subpatches from other random MNIST digits to random locations of the image. The goal is to classify the complete digit present in the image.
Figure 2b shows a random sample of test cases for the 60 by 60 Cluttered Translated MNIST task. Table 2a shows the classification results for the models we trained on 60 by 60 Cluttered Translated MNIST with 4 pieces of clutter. The presence of clutter makes the task much more difficult, but the performance of the attention model is affected less than the performance of the other models. RAM with 4 glimpses reaches 4.96% error, which outperforms the fully-connected models by a wide margin and the convolutional neural network by over 3%, and RAM trained with 6 and 8 glimpses achieves even lower error. Since RAM achieves larger relative error improvements over a convolutional network in the presence of clutter, these results suggest that attention-based models may be better at dealing with clutter than convolutional networks because they can simply ignore it by not looking at it. Two samples of the learned policy are shown in Figure 3 and more are included in the supplementary materials. The first column shows the original data point with the glimpse path overlaid. The location of the first glimpse is marked with a filled circle and the location of the final glimpse is marked with an empty circle. The intermediate points on the path are traced with solid straight lines. Each consecutive image to the right shows a representation of the glimpse that the network sees. It can be seen that the learned policy can reliably find and explore around the object of interest while avoiding clutter at the same time. Finally, Table 2a also includes results for an 8-glimpse RAM model that selects glimpse locations uniformly at random. RAM models that learn the glimpse policy achieve much lower error rates even with half as many glimpses. To further test this hypothesis we also performed experiments on 100 by 100 Cluttered Translated MNIST with 8 pieces of clutter. The test errors achieved by the models we compared are shown in Table 2b.
The results show similar improvements of RAM over a convolutional network. It should be noted that the overall capacity and the amount of computation of our model do not change from 60 × 60 images to 100 × 100, whereas the hidden layer of the convolutional network that is connected to the linear layer grows linearly with the number of pixels in the input. 4.2 Dynamic Environments One appealing property of the recurrent attention model is that it can be applied to videos or interactive problems with a visual input just as easily as to static image tasks. We test the ability of our approach to learn a control policy in a dynamic visual environment, while perceiving the environment through a bandwidth-limited retina, by training it to play a simple game. The game is played on a 24 by 24 screen of binary pixels and involves two objects: a single pixel that represents a ball falling from the top of the screen while bouncing off the sides of the screen, and a two-pixel paddle positioned at the bottom of the screen which the agent controls with the aim of catching the ball. When the falling pixel reaches the bottom of the screen, the agent gets a reward of 1 if the paddle overlaps with the ball and a reward of 0 otherwise. The game then restarts from the beginning. We trained the recurrent attention model to play the game of "Catch" using only the final reward as input. The network had a 6 by 6 retina at three scales as its input, which means that the agent had to capture the ball in the 6 by 6 highest resolution region in order to know its precise position. In addition to the two location actions, the attention model had three game actions (left, right, and do nothing), and the action network f_a used a linear softmax to model a distribution over the game actions. We used a core network of 256 LSTM units. We performed random search to find suitable hyper-parameters and trained each agent for 20 million frames.
A video of the best agent, which catches the ball roughly 85% of the time, can be downloaded from http://www.cs.toronto.edu/~vmnih/docs/attention.mov. The video shows that the recurrent attention model learned to play the game by tracking the ball near the bottom of the screen. Since the agent was not in any way told to track the ball and was only rewarded for catching it, this result demonstrates the ability of the model to learn effective task-specific attention policies. 5 Discussion This paper introduced a novel visual attention model that is formulated as a single recurrent neural network which takes a glimpse window as its input and uses the internal state of the network to select the next location to focus on, as well as to generate control signals in a dynamic environment. Although the model is not differentiable, the proposed unified architecture is trained end-to-end from pixel inputs to actions using a policy gradient method. The model has several appealing properties. First, both the number of parameters and the amount of computation RAM performs can be controlled independently of the size of the input images. Second, the model is able to ignore clutter present in an image by centering its retina on the relevant regions. Our experiments show that RAM significantly outperforms a convolutional architecture with a comparable number of parameters on a cluttered object classification task. Additionally, the flexibility of our approach allows for a number of interesting extensions. For example, the network can be augmented with another action that allows it to terminate at any time point and make a final classification decision. Our preliminary experiments show that this allows the network to learn to stop taking glimpses once it has enough information to make a confident classification. The network can also be allowed to control the scale at which the retina samples the image, allowing it to fit objects of different size in the fixed-size retina.
In both cases, the extra actions can be simply added to the action network f_a and trained using the policy gradient procedure we have described. Given the encouraging results achieved by RAM, applying the model to large scale object recognition and video classification is a natural direction for future work.

References
[1] Bogdan Alexe, Thomas Deselaers, and Vittorio Ferrari. What is an object? In CVPR, 2010.
[2] Bogdan Alexe, Nicolas Heess, Yee Whye Teh, and Vittorio Ferrari. Searching for objects driven by context. In NIPS, 2012.
[3] James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13:281–305, 2012.
[4] Nicholas J. Butko and Javier R. Movellan. Optimal scanning for faster object detection. In CVPR, 2009.
[5] N.J. Butko and J.R. Movellan. I-POMDP: An infomax model of eye movement. In Proceedings of the 7th IEEE International Conference on Development and Learning, ICDL '08, pages 139–144, 2008.
[6] Misha Denil, Loris Bazzani, Hugo Larochelle, and Nando de Freitas. Learning where to attend with deep architectures for image tracking. Neural Computation, 24(8):2151–2184, 2012.
[7] Pedro F. Felzenszwalb, Ross B. Girshick, and David A. McAllester. Cascade object detection with deformable part models. In CVPR, 2010.
[8] Ross B. Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR, abs/1311.2524, 2013.
[9] Mary Hayhoe and Dana Ballard. Eye movements in natural behavior. Trends in Cognitive Sciences, 9(4):188–194, 2005.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[11] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254–1259, 1998.
[12] Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1106–1114, 2012.
[13] Christoph H. Lampert, Matthew B. Blaschko, and Thomas Hofmann. Beyond sliding windows: Object localization by efficient subwindow search. In CVPR, 2008.
[14] Hugo Larochelle and Geoffrey E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. In NIPS, 2010.
[15] Stefan Mathe and Cristian Sminchisescu. Action from still image dataset and inverse optimal control to learn task specific visual scanpaths. In NIPS, 2013.
[16] Lucas Paletta, Gerald Fritz, and Christin Seifert. Q-learning of sequential attention for visual object recognition from informative local descriptors. In CVPR, 2005.
[17] M. Ranzato. On learning where to look. ArXiv e-prints, 2014.
[18] Ronald A. Rensink. The dynamic representation of scenes. Visual Cognition, 7(1-3):17–42, 2000.
[19] Pierre Sermanet, David Eigen, Xiang Zhang, Michaël Mathieu, Rob Fergus, and Yann LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. CoRR, abs/1312.6229, 2013.
[20] Kenneth O. Stanley and Risto Miikkulainen. Evolving a roving eye for Go. In GECCO, 2004.
[21] Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, pages 1057–1063. MIT Press, 2000.
[22] Antonio Torralba, Aude Oliva, Monica S. Castelhano, and John M. Henderson. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychological Review, pages 766–786, 2006.
[23] K.E.A. van de Sande, J.R.R. Uijlings, T. Gevers, and A.W.M. Smeulders. Segmentation as selective search for object recognition. In ICCV, 2011.
[24] Paul A. Viola and Michael J. Jones. Rapid object detection using a boosted cascade of simple features. In CVPR, 2001.
[25] Daan Wierstra, Alexander Foerster, Jan Peters, and Juergen Schmidhuber. Solving deep memory POMDPs with recurrent policy gradients. In ICANN, 2007.
[26] R.J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992.
Dependent nonparametric trees for dynamic hierarchical clustering Avinava Dubey∗†, Qirong Ho∗‡, Sinead Williamson£, Eric P. Xing† † Machine Learning Department, Carnegie Mellon University ‡ Institute for Infocomm Research, A*STAR £ McCombs School of Business, University of Texas at Austin akdubey@cs.cmu.edu, hoqirong@gmail.com, sinead.williamson@mccombs.utexas.edu, epxing@cs.cmu.edu Abstract Hierarchical clustering methods offer an intuitive and powerful way to model a wide variety of data sets. However, the assumption of a fixed hierarchy is often overly restrictive when working with data generated over a period of time: We expect both the structure of our hierarchy and the parameters of the clusters to evolve with time. In this paper, we present a distribution over collections of time-dependent, infinite-dimensional trees that can be used to model evolving hierarchies, and present an efficient and scalable algorithm for performing approximate inference in such a model. We demonstrate the efficacy of our model and inference algorithm on both synthetic data and real-world document corpora. 1 Introduction Hierarchically structured clustering models offer a natural representation for many forms of data. For example, we may wish to hierarchically cluster animals, where "dog" and "cat" are subcategories of "mammal", and "poodle" and "dachshund" are subcategories of "dog". When modeling scientific articles, articles about machine learning and programming languages may be subcategories under computer science. Representing clusters in a tree structure allows us to explicitly capture these relationships, and allows clusters that are closer in tree-distance to have more similar parameters. Since hierarchical structures occur commonly, there exists a rich literature on statistical models for trees. We are interested in nonparametric distributions over trees – that is, distributions over trees with infinitely many leaves and infinitely many internal nodes.
We can model any finite data set using a finite subset of such a tree, marginalizing over the infinitely many unoccupied branches. The advantage of such an approach is that we do not have to specify the tree dimensionality in advance, and can grow our representation in a consistent manner if we observe more data. In many settings, our data points are associated with a point in time – for example the date when a photograph was taken or an article was written. A stationary clustering model is inappropriate in such a context: The number of clusters may change over time; the relative popularities of clusters may vary; and the location of each cluster in parameter space may change. As an example, consider a topic model for scientific articles over the twentieth century. The field of computer science – and therefore topics related to it – did not exist in the first half of the century. The proportion of scientific articles devoted to genetics has likely increased over the century, and the terminology used in such articles has changed with the development of new sequencing technology. Despite this, to the best of our knowledge, there are no nonparametric distributions over time-evolving trees in the literature. There exist a variety of distributions over stationary trees [1, 14, 5, 13, 10], and time-evolving non-hierarchical clustering models [16, 7, 11, 2, 4, 12] – but no models that combine time evolution and hierarchical structure. The reason for this is likely practical: Inference in trees is typically very computationally intensive, and adding temporal variation will, in general, increase the computational requirements. Designing such a model must, therefore, proceed hand in hand with developing efficient and scalable inference schemes.
Figure 1: Our dependent tree-structured stick-breaking process can model trees of arbitrary size and shape, and captures popularity and parameter changes through time. a) Infinite tree: model any number of nodes (clusters, topics), of any branching factor, and up to any depth. b) Changing popularity: nodes can change in probability mass, or new nodes can be created. c) Cluster/topic drift: node parameters can evolve over time.

In this paper, we define a distribution over temporally varying trees with infinitely many nodes that captures this form of variation, and describe how this model can cluster both real-valued observations and text data. Further, we propose a scalable approximate inference scheme that can be run in parallel, and demonstrate its efficacy on synthetic data where ground-truth clustering is available, as well as demonstrate qualitative and quantitative performance on three text corpora. 2 Background The model proposed in this paper is a dependent nonparametric process with tree-structured marginals. A dependent nonparametric process [12] is a distribution over collections of random measures indexed by values in some covariate space, such that at each covariate value, the marginal distribution is given by some known nonparametric distribution. For example, a dependent Dirichlet process [12, 7, 11] is a distribution over collections of probability measures with Dirichlet process-distributed marginals; a dependent Pitman-Yor process [15] is a distribution over collections of probability measures with Pitman-Yor process-distributed marginals; a dependent Indian buffet process [17] is a distribution over collections of matrices with Indian buffet process-distributed marginals; etc. If our covariate space is time, such distributions can be used to construct nonparametric, time-varying models.
There are two main methods of inducing dependency: Allowing the sizes of the atoms composing the measure to vary across covariate space, and allowing the parameter values associated with the atoms to vary across covariate space. In the context of a time-dependent topic model, these methods correspond to allowing the popularity of a topic to change over time, and allowing the words used to express a topic to change over time (topic drift). Our proposed model incorporates both forms of dependency. In the supplement, we discuss some specific dependent nonparametric models that share properties with our model. The key difference between our proposed model and existing dependent nonparametric models is that ours has tree-distributed marginals. There are a number of options for the marginal distribution over trees, as we discuss in the supplement. We choose a distribution over infinite-dimensional trees known as the tree-structured stick-breaking process [TSSBP, 1], described in Section 2.1. 2.1 The tree-structured stick-breaking process The tree-structured stick-breaking process (TSSBP) is a distribution over trees with infinitely many leaves and infinitely many internal nodes. Each node ϵ within the tree is associated with a mass π_ϵ such that \sum_ϵ π_ϵ = 1, and each data point is assigned to a node in the tree according to p(z_n = ϵ) = π_ϵ, where z_n is the node assignment of the nth data point. The TSSBP is unique among the current toolbox of random infinite-dimensional trees in that data can be assigned to an internal node, rather than a leaf, of the tree. This property is often desirable; for example, in a topic modeling context, a document could be assigned to a general topic such as "science" that lives toward the root of the tree, or to a more specific topic such as "genetics" that is a descendant of the science topic.
The TSSBP can be represented using two interleaving stick-breaking processes – one (parametrized by α) that determines the size of a node and another (parametrized by γ) that determines the branching probabilities. Index the root node as node ∅ and let $\pi_\emptyset$ be the mass assigned to it. Index its (countably infinite) child nodes as node 1, node 2, . . . and let $\pi_1, \pi_2, \dots$ be the masses assigned to them; index the child nodes of node 1 as nodes 1·1, 1·2, . . . and let $\pi_{1 \cdot 1}, \pi_{1 \cdot 2}, \dots$ be the masses assigned to nodes 1·1, 1·2, . . . ; etc. Then we can sample the infinite-dimensional tree as:

$$\nu_\epsilon \sim \mathrm{Beta}(1, \alpha(|\epsilon|)), \qquad \psi_\epsilon \sim \mathrm{Beta}(1, \gamma), \qquad \pi_\emptyset = \nu_\emptyset, \qquad \varphi_\emptyset = 1,$$
$$\varphi_{\epsilon \cdot i} = \psi_{\epsilon \cdot i} \prod_{j=1}^{i-1} (1 - \psi_{\epsilon \cdot j}), \qquad \pi_\epsilon = \nu_\epsilon \varphi_\epsilon \prod_{\epsilon' \prec \epsilon} (1 - \nu_{\epsilon'}) \varphi_{\epsilon'}, \qquad (1)$$

where |ϵ| indicates the depth of node ϵ, and ϵ′ ≺ ϵ indicates that ϵ′ is an ancestor node of ϵ. We refer to the resulting infinite-dimensional weighted tree as $\Pi = ((\pi_\epsilon), (\varphi_{\epsilon \cdot i}))$.

3 Dependent tree-structured stick-breaking processes

We now describe a dependent tree-structured stick-breaking process where both atom sizes and their locations vary with time. We first describe a distribution over atom sizes, and then use this distribution over collections of trees as the basis for time-varying clustering models and topic models.

3.1 A distribution over time-varying trees

We start with the basic TSSBP model [1] (described in Section 2.1 and the left of Figure 1), and modify it so that the latent variables $\nu_\epsilon$, $\psi_\epsilon$ and $\pi_\epsilon$ are replaced with sequences $\nu_\epsilon^{(t)}$, $\psi_\epsilon^{(t)}$ and $\pi_\epsilon^{(t)}$ indexed by discrete time t ∈ T (the middle of Figure 1). The forms of $\nu_\epsilon^{(t)}$ and $\psi_\epsilon^{(t)}$ are chosen so that the marginal distribution over the $\pi_\epsilon^{(t)}$ is as described in Equation 1. Let $N^{(t)}$ be the number of observations at time t, and let $z_n^{(t)}$ be the node allocation of the nth observation at time t.
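Before adding time dependence, it may help to see the stationary construction concretely. The following is a minimal sketch of Equation 1, not the authors' implementation; the truncation depth and branching width are assumptions made purely so the recursion terminates:

```python
import numpy as np

rng = np.random.default_rng(0)

def tssbp_masses(alpha0=0.5, lam=0.5, gamma=0.5, depth=3, width=3):
    """Sample node masses pi_eps from a truncated TSSBP (Equation 1).

    alpha(d) = lam**d * alpha0 is the depth-dependent stopping parameter;
    gamma controls the stick-breaking over a node's children.
    Nodes are indexed by tuples: () is the root, (0, 2) is child 2 of child 0.
    """
    masses = {}

    def recurse(eps, weight):
        # weight = phi_eps * product over ancestors of (1 - nu) * phi
        nu = rng.beta(1.0, lam ** len(eps) * alpha0)
        masses[eps] = nu * weight          # pi_eps = nu_eps * weight
        if len(eps) + 1 >= depth:
            return
        stick = 1.0
        for i in range(width):             # truncated stick over children
            psi = rng.beta(1.0, gamma)
            phi_child = psi * stick        # phi_{eps . i}
            stick *= 1.0 - psi
            recurse(eps + (i,), phi_child * (1.0 - nu) * weight)

    recurse((), 1.0)                       # root: phi_root = 1, no ancestors
    return masses
```

Because the tree is truncated, the sampled masses sum to slightly less than one; the missing mass belongs to the nodes that were cut off.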
For each node ϵ at time t, let $X_\epsilon^{(t)} = \sum_{n=1}^{N^{(t)}} I(z_n^{(t)} = \epsilon)$ be the number of observations assigned to node ϵ at time t, and $Y_\epsilon^{(t)} = \sum_{n=1}^{N^{(t)}} I(\epsilon \prec z_n^{(t)})$ be the number of observations assigned to descendants of node ϵ. Introduce a "window" parameter h ∈ ℕ. We can then define a prior predictive distribution over the tree at time t as

$$\nu_\epsilon^{(t)} \sim \mathrm{Beta}\Big(1 + \sum_{t'=t-h}^{t-1} X_\epsilon^{(t')},\; \alpha(|\epsilon|) + \sum_{t'=t-h}^{t-1} Y_\epsilon^{(t')}\Big),$$
$$\psi_{\epsilon \cdot i}^{(t)} \sim \mathrm{Beta}\Big(1 + \sum_{t'=t-h}^{t-1} \big(X_{\epsilon \cdot i}^{(t')} + Y_{\epsilon \cdot i}^{(t')}\big),\; \gamma + \sum_{j > i} \sum_{t'=t-h}^{t-1} \big(X_{\epsilon \cdot j}^{(t')} + Y_{\epsilon \cdot j}^{(t')}\big)\Big). \qquad (2)$$

Following [1], we let $\alpha(d) = \lambda^d \alpha_0$, for $\alpha_0 > 0$ and $\lambda \in (0, 1)$. This defines a sequence of trees $(\Pi^{(t)} = ((\pi_\epsilon^{(t)}), (\varphi_{\epsilon \cdot i}^{(t)})), t \in T)$. Intuitively, the prior distribution over a tree at time t is given by the posterior distribution of the (stationary) TSSBP, conditioned on the observations in some window t − h, . . . , t − 1. The following theorem gives the equivalence of the dynamic TSSBP (dTSSBP) and the TSSBP.

Theorem 1. The marginal posterior distribution of the dTSSBP, at time t, follows a TSSBP.

The proof is a straightforward extension of that for the generalized Pólya urn dependent Dirichlet process [7] and is given in the supplementary material. The above theorem implies that Equation 2 defines a dependent tree-structured stick-breaking process. We note that an alternative choice for inducing dependency would be to down-weight the contributions of observations from previous time-steps. For example, we could exponentially decay the contributions of observations from previous time-steps, inducing a similar form of dependency to that found in the recurrent Chinese restaurant process [2]. However, unlike the method described in Equation 2, such an approach would not yield stationary TSSBP-distributed marginals.

3.2 Dependent hierarchical clustering

The construction above gives a distribution over infinite-dimensional trees, which in turn have a probability distribution over their nodes.
In order to use this distribution in a hierarchical Bayesian model for data, we must associate each node with a parameter value $\theta_\epsilon^{(t)}$. We let $\Theta^{(t)}$ denote the set of all parameters $\theta_\epsilon^{(t)}$ associated with a tree $\Pi^{(t)}$. We wish to capture two properties: 1) within a tree $\Pi^{(t)}$, nodes have similar values to their parents; and 2) between trees $\Pi^{(t)}$ and $\Pi^{(t+1)}$, corresponding parameters $\theta_\epsilon^{(t)}$ and $\theta_\epsilon^{(t+1)}$ have similar values. This form of variation is shown in the right of Figure 1. In this subsection, we present two models that exhibit these properties: one appropriate for real-valued data, and one appropriate for multinomial data.

3.2.1 A time-varying, tree-structured mixture of Gaussians

An infinite mixture of Gaussians is a flexible choice for density estimation and clustering real-valued observations. Here, we suggest a time-varying hierarchical clustering model that is similar to the generalized Gaussian model of [1]. The model assumes Gaussian-distributed data at each node, and allows the means of clusters to evolve in an auto-regressive model, as below:

$$\theta_\emptyset^{(t)} \mid \theta_\emptyset^{(t-1)} \sim \mathcal{N}\big(\theta_\emptyset^{(t-1)},\; \sigma_0 \sigma_1^{a} I\big), \qquad \theta_{\epsilon \cdot i}^{(t)} \mid \theta_\epsilon^{(t)}, \theta_{\epsilon \cdot i}^{(t-1)} \sim \mathcal{N}(m, s^2 I), \qquad (3)$$

where

$$s^2 = \Big(\frac{1}{\sigma_0 \sigma_1^{|\epsilon \cdot i|}} + \frac{1}{\sigma_0 \sigma_1^{|\epsilon \cdot i| + a}}\Big)^{-1}, \qquad m = s^2 \Big(\frac{\theta_\epsilon^{(t)}}{\sigma_0 \sigma_1^{|\epsilon \cdot i|}} + \frac{\eta\, \theta_{\epsilon \cdot i}^{(t-1)}}{\sigma_0 \sigma_1^{|\epsilon \cdot i| + a}}\Big),$$

$\sigma_0 > 0$, $\sigma_1 \in (0, 1)$, $\eta \in [0, 1)$, and $a \geq 1$. Due to the self-conjugacy of the Gaussian distribution, this corresponds to a Markov network with factor potentials given by unnormalized Gaussian distributions: up to a normalizing constant, the factor potential associated with the link between $\theta_\epsilon^{(t-1)}$ and $\theta_\epsilon^{(t)}$ is Gaussian with variance $\sigma_0 \sigma_1^{|\epsilon| + a}$, and the factor potential associated with the link between $\theta_\epsilon^{(t)}$ and $\theta_{\epsilon \cdot i}^{(t)}$ is Gaussian with variance $\sigma_0 \sigma_1^{|\epsilon \cdot i|}$. For a single time point, this allows for fractal-like behavior, where the distance between child and parent decreases down the tree.
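The mean m and variance s² in Equation 3 are a precision-weighted combination of two Gaussian factors: the within-tree parent at time t and the same node at time t−1. A sketch (names are ours; we treat σ₀σ₁^{|ε·i|} and σ₀σ₁^{|ε·i|+a} as the variances of the two factors, as in Equation 3):

```python
def child_conditional(theta_parent, theta_prev, depth,
                      sigma0=1.0, sigma1=0.5, eta=0.5, a=1):
    """Mean m and variance s2 of theta_{eps.i}^{(t)} given its tree parent
    at time t and its own value at time t-1 (Equation 3).

    depth is |eps.i|, the depth of the child node.
    """
    v_tree = sigma0 * sigma1 ** depth        # parent-to-child factor variance
    v_time = sigma0 * sigma1 ** (depth + a)  # time (t-1)-to-t factor variance
    s2 = 1.0 / (1.0 / v_tree + 1.0 / v_time)
    m = s2 * (theta_parent / v_tree + eta * theta_prev / v_time)
    return m, s2
```

Since σ₁ ∈ (0, 1) and a ≥ 1, the temporal factor always has the smaller variance, encoding the assumption that variation across time is smaller than variation down the tree.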
This fractal-like behavior, which is not present in the generalized Gaussian model of [1], makes it easier to identify the root node, and guarantees that the marginal distribution over the location of the leaf nodes has finite variance. The a parameter enforces the idea that the amount of variation between $\theta_\epsilon^{(t)}$ and $\theta_\epsilon^{(t+1)}$ is smaller than that between $\theta_\epsilon^{(t)}$ and $\theta_{\epsilon \cdot i}^{(t)}$, while η ensures the variance of node parameters remains finite across time. We chose spherical Gaussian distributions to ensure that structural variation is captured by the tree rather than by node parameters.

3.3 A time-varying model for hierarchically clustering documents

Given a dictionary of V words, a document can be represented using a V-dimensional term frequency vector, which corresponds to a location on the surface of the (V − 1)-dimensional unit sphere. The von Mises-Fisher distribution, with mean direction μ and concentration parameter τ, provides a distribution on this space. A mixture of von Mises-Fisher distributions can, therefore, be used to cluster documents [3, 8]. Following the terminology of topic modeling [6], the mean direction $\mu_k$ associated with the kth cluster can be interpreted as the topic associated with that cluster. We construct a time-dependent hierarchical clustering model appropriate for documents by associating nodes of our dependent nonparametric tree with topics. Let $x_n^{(t)}$ be the vector associated with the nth document at time t.
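The document representation just described can be sketched in a few lines (a minimal illustration; the tokenization and out-of-vocabulary handling here are our own simplifying assumptions, not the paper's preprocessing):

```python
import numpy as np
from collections import Counter

def tf_unit_vector(tokens, vocab):
    """L2-normalized term-frequency vector: a point on the surface of the
    (V-1)-dimensional unit sphere, as assumed by the vMF likelihood.

    vocab maps each word to its dimension index; unknown words are dropped.
    """
    v = np.zeros(len(vocab))
    for word, count in Counter(tokens).items():
        if word in vocab:
            v[vocab[word]] = count
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```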
We assign a mean parameter $\theta_\epsilon^{(t)}$ to each node ϵ in each tree $\Pi^{(t)}$ as

$$\theta_\emptyset^{(t)} \mid \theta_\emptyset^{(t-1)} \sim \mathrm{vMF}\big(\tau_\emptyset^{(t)}, \rho_\emptyset^{(t)}\big), \qquad \theta_{\epsilon \cdot i}^{(t)} \mid \theta_\epsilon^{(t)}, \theta_{\epsilon \cdot i}^{(t-1)} \sim \mathrm{vMF}\big(\tau_{\epsilon \cdot i}^{(t)}, \rho_{\epsilon \cdot i}^{(t)}\big), \qquad (4)$$

where

$$\rho_\emptyset^{(t)} = \kappa_0 \sqrt{1 + \kappa_1^{2a} + 2 \kappa_1^{a} \big(\theta_{-1}^{(t)} \cdot \theta_\emptyset^{(t-1)}\big)}, \qquad \tau_\emptyset^{(t)} = \frac{\kappa_0 \theta_{-1}^{(t)} + \kappa_0 \kappa_1^{a} \theta_\emptyset^{(t-1)}}{\rho_\emptyset^{(t)}},$$
$$\rho_{\epsilon \cdot i}^{(t)} = \kappa_0 \kappa_1^{|\epsilon \cdot i|} \sqrt{1 + \kappa_1^{2a} + 2 \kappa_1^{a} \big(\theta_\epsilon^{(t)} \cdot \theta_{\epsilon \cdot i}^{(t-1)}\big)}, \qquad \tau_{\epsilon \cdot i}^{(t)} = \frac{\kappa_0 \kappa_1^{|\epsilon \cdot i|} \theta_\epsilon^{(t)} + \kappa_0 \kappa_1^{|\epsilon \cdot i| + a} \theta_{\epsilon \cdot i}^{(t-1)}}{\rho_{\epsilon \cdot i}^{(t)}},$$

$\kappa_0 > 0$, $\kappa_1 > 1$, and $\theta_{-1}^{(t)}$ is a probability vector of the same dimension as the $\theta_\epsilon^{(t)}$ that can be interpreted as the parent of the root node at time t.¹ This yields similar dependency behavior to that described in Section 3.2.1. Conditioned on $\Pi^{(t)}$ and $\Theta^{(t)} = (\theta_\epsilon^{(t)})$, we sample each document $x_n^{(t)}$ according to $z_n^{(t)} \sim \mathrm{Discrete}(\Pi^{(t)})$ and $x_n^{(t)} \sim \mathrm{vMF}(\theta_{z_n^{(t)}}^{(t)}, \beta)$. This is a hierarchical extension of the temporal vMF mixture proposed by [8].

4 Online Learning

In many time-evolving applications, we observe data points in an online setting. We are typically interested in obtaining predictions for future data points, or characterizing the clustering structure of current data, rather than improving predictive performance on historic data. We therefore propose a sequential online learning algorithm, where at each time t we infer the parameter settings for the tree $\Pi^{(t)}$ conditioned on the previous trees, which we do not re-learn. This allows us to focus our computational efforts on the most recent (and likely most relevant) data. This has the added advantage of reducing the computational demands of the algorithm, as we do not incorporate a backwards pass through the data, and are only ever considering a fraction of the data at a time. In developing an inference scheme, there is always a trade-off between estimate quality and computational requirements. MCMC samplers are often the "gold standard" of inference techniques, because they have the true posterior distribution as the stationary distribution of their Markov chain.
However, they can be very slow, particularly in complex models. Estimating the parameter setting that maximizes the data likelihood is much cheaper, but cannot capture the full posterior.

¹ In our experiments, we set $\theta_{-1}^{(t)}$ to be the average over all data points at time t. This ensures that the root node is close to the centroid of the data, rather than the periphery.

In order to develop an inference algorithm that is parallelizable, runs in reasonable time, but still obtains good predictive performance, we combine Gibbs sampling steps for learning the tree parameters ($\Pi^{(t)}$) and the topic indicators ($z_n^{(t)}$) with a MAP method for estimating the location parameters ($\theta_\epsilon^{(t)}$). The resulting algorithm has the following desirable properties:

1. The priors for $\nu_\epsilon^{(t)}, \psi_\epsilon^{(t)}$ only depend on $\{z_n^{(0)}\}, \dots, \{z_n^{(t-1)}\}$, whose sufficient statistics $\{X_\epsilon^{(0)}, Y_\epsilon^{(0)}\}, \dots, \{X_\epsilon^{(t-1)}, Y_\epsilon^{(t-1)}\}$ can be updated in amortized constant time.

2. The posteriors for $\nu_\epsilon^{(t)}, \psi_\epsilon^{(t)}$ are conditionally independent given $\{z_n^{(1)}\}, \dots, \{z_n^{(t)}\}$. Hence we can Gibbs sample $\nu_\epsilon^{(t)}, \psi_\epsilon^{(t)}$ in parallel given the cluster assignments $\{z_n^{(1)}\}, \dots, \{z_n^{(t)}\}$ (or more precisely, their sufficient statistics $\{X_\epsilon, Y_\epsilon\}$). Similarly, we can Gibbs sample the cluster/topic assignments $\{z_n^{(t)}\}$ in parallel given the parameters $\{\nu_\epsilon^{(t)}, \psi_\epsilon^{(t)}, \theta_\epsilon^{(t)}\}$ and the data, as well as infer the MAP estimate of $\{\theta_\epsilon^{(t)}\}$ in parallel given the data and the cluster/topic assignments.

Because of the online assumption, we do not consider evidence from times u > t.

Sampling $\nu_\epsilon^{(t)}, \psi_\epsilon^{(t)}$: Due to the conjugacy between the beta and binomial distributions, we can easily Gibbs sample the stick-breaking parameters:

$$\nu_\epsilon^{(t)} \mid X_\epsilon, Y_\epsilon \sim \mathrm{Beta}\Big(1 + \sum_{t'=t-h}^{t} X_\epsilon^{(t')},\; \alpha(|\epsilon|) + \sum_{t'=t-h}^{t} Y_\epsilon^{(t')}\Big),$$
$$\psi_{\epsilon \cdot i}^{(t)} \mid X_{\epsilon \cdot i}, Y_{\epsilon \cdot i} \sim \mathrm{Beta}\Big(1 + \sum_{t'=t-h}^{t} \big(X_{\epsilon \cdot i}^{(t')} + Y_{\epsilon \cdot i}^{(t')}\big),\; \gamma + \sum_{j > i} \sum_{t'=t-h}^{t} \big(X_{\epsilon \cdot j}^{(t')} + Y_{\epsilon \cdot j}^{(t')}\big)\Big).$$
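Assembling the Beta parameters from the windowed sufficient statistics is the whole of this step. A sketch (function and variable names are ours), using α(d) = λᵈα₀ as above:

```python
def beta_params_nu(X_hist, Y_hist, t, h, depth, alpha0=0.5, lam=0.5):
    """Beta parameters for Gibbs sampling nu_eps^{(t)}: counts are pooled
    over the window t-h, ..., t (the posterior form above; the prior of
    Equation 2 is identical but stops at t-1).

    X_hist[t'] holds X_eps^{(t')} (points assigned to the node at time t');
    Y_hist[t'] holds Y_eps^{(t')} (points assigned to its descendants).
    """
    window = range(t - h, t + 1)
    a = 1.0 + sum(X_hist.get(tp, 0) for tp in window)
    b = lam ** depth * alpha0 + sum(Y_hist.get(tp, 0) for tp in window)
    return a, b
```

Counts outside the window are simply dropped, which is what allows the sufficient statistics to be maintained in amortized constant time and the per-node draws to run in parallel.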
The $\nu_\epsilon^{(t)}, \psi_\epsilon^{(t)}$ distributions for each node are conditionally independent given the counts X, Y, and so the sampler can be parallelized. We only explicitly store $\pi_\epsilon^{(t)}, \varphi_\epsilon^{(t)}, \theta_\epsilon^{(t)}$ for nodes ϵ with nonzero counts, i.e. $\sum_{t'=t-h}^{t} \big(X_\epsilon^{(t')} + Y_\epsilon^{(t')}\big) > 0$.

Sampling $z_n^{(t)}$: Conditioned on the $\nu_\epsilon^{(t)}$ and $\psi_\epsilon^{(t)}$, the distribution over the cluster assignments $z_n^{(t)}$ is just given by the TSSBP. We therefore use the slice sampling method described in [1] to Gibbs sample $z_n^{(t)} \mid \{\nu_\epsilon^{(t)}\}, \{\psi_\epsilon^{(t)}\}, x_n^{(t)}, \theta$. Since the cluster assignments are conditionally independent given the tree, this step can be performed in parallel.

Learning θ: It is possible to Gibbs sample the cluster parameters θ; however, in the document clustering case described in Section 3.3, this requires far more time than sampling all other parameters. To improve the speed of our algorithm, we instead use maximum a posteriori (MAP) estimates for θ, obtained using a parallel coordinate ascent algorithm. Notably, conditioned on the trees at times t − 1 and t + 1, the $\theta_\epsilon^{(t)}$ at odd-numbered tree depths |ϵ| are conditionally independent given the $\theta_{\epsilon'}^{(t)}$ at even-numbered tree depths |ϵ′|, and vice versa. Hence, our algorithm alternates between parallel optimization of the odd-depth $\theta_\epsilon^{(t)}$ and parallel optimization of the even-depth $\theta_\epsilon^{(t)}$. In general, the conditional distribution of a cluster parameter $\theta_\epsilon^{(t)}$ depends on the values of its predecessor $\theta_\epsilon^{(t-1)}$, its successor $\theta_\epsilon^{(t+1)}$, its parent at time t, and its children at time t. In some cases, not all of these values will be available – for example, if a node was unoccupied at previous time steps. In this case, the distribution depends on the full history of the parent node. For computational reasons, and because we do not wish to store the full history, we approximate the distribution as being dependent only on observed members of the node's Markov blanket.
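The alternating schedule over tree depths can be sketched as follows (our own minimal illustration of the scheduling idea, not the authors' code; `update_fn` stands in for whatever MAP update the likelihood implies):

```python
from concurrent.futures import ThreadPoolExecutor

def alternating_map_sweeps(params, update_fn, sweeps=5):
    """Parallel coordinate ascent over node parameters, alternating between
    even- and odd-depth nodes: nodes of one parity are conditionally
    independent given the other parity, so each half-sweep may run in
    parallel.

    params: dict mapping node index (a tuple, as in the TSSBP) -> parameter.
    update_fn(eps, params) -> new parameter value for node eps.
    """
    with ThreadPoolExecutor() as pool:
        for _ in range(sweeps):
            for parity in (0, 1):  # even depths first, then odd depths
                layer = [e for e in params if len(e) % 2 == parity]
                updated = list(pool.map(lambda e: update_fn(e, params), layer))
                for e, v in zip(layer, updated):
                    params[e] = v
    return params
```

Each half-sweep reads a frozen snapshot of the opposite parity and only writes its own layer after all updates are computed, which is what makes the parallel map safe.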
5 Experimental evaluation

We evaluate the performance of our model on both synthetic and real-world data sets. Evaluation on synthetic data sets allows us to verify that our inference algorithm can recover the "true" evolving hierarchical structure underlying our data. Evaluation on real-world data allows us to evaluate whether our modeling assumptions are useful in practice.

5.1 Synthetic data

We manually created a time-evolving tree, as shown in Figure 2, with Gaussian-distributed data at each node. This synthetic time-evolving tree features temporal variation in node probabilities, temporal variation in node parameters, and addition and deletion of nodes. Using the Gaussian model described in Equation 3, we inferred the structure of the tree at each time period as described in Section 4. Figure 3 shows the recovered tree structure, demonstrating the ability of our inference algorithm to recover the expected evolving hierarchical structure. Note that it accurately captures evolution in node probabilities and location, and the addition and deletion of new nodes.

Figure 2: Ground truth tree, evolving over three time steps.

Figure 3: Recovered tree structure, over three consecutive time periods. Each color indicates a node in the tree and each arrow indicates a branch connecting parent to child; nodes are consistently colored across time.

Table 1: Test set average log-likelihood on three datasets.

             dTSSBP                    o-TSSBP                   T-TSSBP
Depth limit  4            3            4            3            4            3
TWITTER      522 ± 4.35   249 ± 0.98   414 ± 3.31   199 ± 2.19   335 ± 54.8   182 ± 24.1
SOU          2708 ± 32.0  1320 ± 33.6  1455 ± 44.5  583 ± 16.4   1687 ± 329   1089 ± 143
PNAS         4562 ± 116   3217 ± 195   2672 ± 357   1163 ± 196   4333 ± 647   2962 ± 685

             dDP          o-DP         T-DP
TWITTER      204 ± 8.82   136 ± 0.42   112 ± 10.9
SOU          834 ± 51.2   633 ± 18.8   890 ± 70.5
PNAS         2374 ± 51.7  1061 ± 10.5  2174 ± 134

5.2 Real-world data

In Section 3.3, we described how the dependent TSSBP can be combined with a von Mises-Fisher likelihood to cluster documents.
To evaluate this model, we looked at three corpora:

• TWITTER: 673,102 tweets containing hashtags relevant to the NFL, collected over 18 weeks in 2011 and containing 2,636 unique words (after stopwording). We grouped the tweets into 9 two-week epochs.

• PNAS: 79,800 paper titles from the Proceedings of the National Academy of Sciences between 1915 and 2005, containing 36,901 unique words (after stopwording). We grouped the titles into 10 ten-year epochs.

• STATE OF THE UNION (SOU): Presidential SoU addresses from 1790 through 2002, containing 56,352 sentences and 21,505 unique words (after stopwording). We grouped the sentences into 21 ten-year epochs.

In each case, documents were represented using their vector of term frequencies. Our hypothesis is that the topical structure of language is hierarchically structured and time-evolving, and that a model that captures these properties will achieve better performance than models that ignore hierarchical structure and/or temporal evolution. To test these hypotheses, we compare our dependent tree-structured stick-breaking process (dTSSBP) against several online nonparametric models for document clustering:

1. Multiple tree-structured stick-breaking processes (T-TSSBP): We modeled the entire corpus using the stationary TSSBP model, with each node modeled using an independent von Mises-Fisher distribution. Each time period is modeled with a separate tree, using a similar implementation to our time-dependent TSSBP.

2. "Online" tree-structured stick-breaking process (o-TSSBP): This simulates online learning of a single, stationary tree over the entire corpus. We used our dTSSBP implementation with an infinite window h = ∞; once a node is created at time t, we prevent its vMF mean $\theta_\epsilon^{(t)}$ from changing in future time points.

3. Dependent Dirichlet process (dDP): We modeled the entire corpus using an h-order Markov generalized Pólya urn DDP [7]. This model was implemented by modifying our dTSSBP code to have a single level.
Node parameters were evolved as $\theta_k^{(t)} \sim \mathrm{vMF}(\theta_k^{(t-1)}, \xi)$.

4. Multiple Dirichlet processes (T-DP): We modeled the entire corpus using DP mixtures of von Mises-Fisher distributions, one DP per time period. Each node was modeled using an independent von Mises-Fisher distribution. We used our own implementation.

5. "Online" Dirichlet process (o-DP): This simulates online learning of a single DP over the entire corpus. We used our dDP implementation with an infinite window h = ∞; once a cluster is instantiated at time t, we prevent its vMF mean $\theta^{(t)}$ from changing in future time points.

[Figure 4: PNAS dataset: Birth, growth, and death of tree-structured topics in our dTSSBP model. This illustration captures some trends in American scientific research throughout the 20th century, by focusing on the evolution of parent and child topics in two major scientific areas: Chemistry and Immunology (the rest of the tree has been omitted for clarity). At each epoch, we show the number of documents assigned to each topic, as well as its most popular words (according to the vMF mean θ).]

Evaluation scheme: We divide each dataset into two parts: the first 50% and the last 50% of time points. We use the first 50% to tune model parameters and select a good random restart (by training on 90% and testing on 10% of the data at each time point), and then use the last 50% to evaluate the performance of the best parameters/restart (again, by training on 90% and testing on 10% of the data). When training the 3 TSSBP-based models, we grid-searched κ₀ ∈ {1, 10, 100, 1000, 10000}, and fixed κ₁ = 1, a = 0 for simplicity. Each value of κ₀ was run 5 times to get different random restarts, and we took the best κ₀-restart pair for evaluation on the last 50% of time points. For the 3 DP-based models, there is no κ₀ parameter, so we simply took 5 random restarts and used the best one for evaluation. For all TSSBP- and DP-based models, we repeated the evaluation phase 5 times to get error bars.
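The tuning protocol above amounts to a grid search with random restarts. A small sketch of the selection step (names ours; `train_and_score` stands in for training on the 90% split and scoring held-out likelihood):

```python
import itertools

def select_best(kappa0_grid, n_restarts, train_and_score):
    """Grid search over kappa0 with random restarts: evaluate every
    (kappa0, restart) pair on the tuning split and keep the best.

    train_and_score(kappa0, seed) -> held-out log-likelihood (higher is
    better).
    """
    candidates = itertools.product(kappa0_grid, range(n_restarts))
    return max(candidates, key=lambda c: train_and_score(*c))
```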
Every dTSSBP trial completed in under 20 minutes on a single processor core, and we observed moderate (though not perfectly linear) speedups with 2–4 processors.

Parameter settings: For all models, we estimated each node/cluster's vMF concentration parameter β from the data. For the TSSBP-based models, we used stick-breaking parameters γ = 0.5 and α(d) = 0.5^d, and set $\theta_{-1}^{(t)}$ to the average document term frequency vector at time t. In order to keep running times reasonable, we limit the TSSBP-based models to a maximum depth of either 3 or 4 (we report results for both).² For the DP-based models, we used a Dirichlet process concentration parameter of 1. The dDP's inter-epoch vMF concentration parameter was set to ξ = 0.001.

² One justification is that shallow hierarchies are easier to interpret than deep ones; see [5, 9].

Results: Table 1 shows the average log (unnormalized) likelihoods on the test sets (from the last 50% of time points). The tree-based models uniformly out-perform the non-hierarchical models, and the max-depth-4 tree models outperform the max-depth-3 ones. On all 3 datasets, the max-depth-4 dTSSBP uniformly outperforms all other models, confirming our initial hypothesis.

5.3 Qualitative results

In addition to high-quality quantitative results, we find that the time-dependent tree model gives good qualitative performance. Figure 4 shows two time-evolving sub-trees obtained from the PNAS data set. The top level shows a sub-tree concerned with Chemistry; the bottom level shows a sub-tree
concerned with Immunology. Our dynamic tree model discovers closely-related topics and groups them under a sub-tree, and creates, grows, and destroys individual sub-topics as needed to fit the data. For instance, our model captures the sudden surge in Immunology-related research from 1975–1984, which happened right after the structure of the antibody molecule was identified a few years prior. In the Chemistry topic, the study of mechanical properties of materials (pressure, acoustic properties, specific heat, etc.) is a constant presence throughout the century. The study of electrical properties of materials starts off with a topic (in purple) that seems devoted to Physical Chemistry. However, following the development of Quantum Mechanics in the 1930s, this line of research became more closely aligned with Physics than Chemistry, and it disappears from the sub-tree. In its wake, we see the growth of a topic more concerned with electrolytes, solutions, and salts, which remained within the sphere of Chemistry.

Figure 5 shows time-evolving sub-trees obtained from the State of the Union dataset. We see a sub-tree tracking the development of the Cold War. The parent node contains general terms relevant to the Cold War; starting from the 1970s, a child node (shown in purple) contains terms relevant to nuclear arms control, in light of the Strategic Arms Limitation Talks of that decade. The same decade also sees the birth of a child node focused on Asia (shown in cyan), contemporaneous with President Richard Nixon's historic visit to China in 1972. In addition to the Cold War, we also see topics corresponding to events such as the Mexican War, the Civil War, and the Indian Wars, demonstrating our model's ability to detect events in a timeline.

[Figure 5: State of the Union dataset: Birth, growth, and death of tree-structured topics in our dTSSBP model. This illustration captures some key events in American history. At each epoch, we show the number of documents assigned to each topic, as well as its most popular words (according to the vMF mean θ).]
6 Discussion

In this paper, we have proposed a flexible nonparametric model for dynamically-evolving, hierarchically structured data. This model can be applied to multiple types of data using appropriate choices of likelihood; we present an application in document clustering that combines high-quality quantitative performance with intuitively interpretable results. One of the significant challenges in constructing nonparametric dependent tree models is the need for efficient inference algorithms. We make judicious use of approximations and combine MCMC and MAP approximation techniques to develop an inference algorithm that can be applied in an online setting, while being parallelizable.

Acknowledgements: This research was supported by NSF Big Data IIS1447676, DARPA XDATA FA87501220324, and NIH GWAS R01GM087694.

References

[1] R. Adams, Z. Ghahramani, and M. Jordan. Tree-structured stick breaking for hierarchical data. In Advances in Neural Information Processing Systems, 2010.
[2] A. Ahmed and E. Xing. Dynamic non-parametric mixture models and the recurrent Chinese restaurant process: with applications to evolutionary clustering. In SDM, 2008.
[3] A. Banerjee, I. Dhillon, J. Ghosh, and S. Sra. Clustering on the unit hypersphere using von Mises-Fisher distributions. Journal of Machine Learning Research, 6:1345–1382, 2005.
[4] D. Blei and P. Frazier. Distance dependent Chinese restaurant processes. Journal of Machine Learning Research, 12:2461–2488, 2011.
[5] D. Blei, T. Griffiths, M. Jordan, and J. Tenenbaum. Hierarchical topic models and the nested Chinese restaurant process. In Advances in Neural Information Processing Systems, 2004.
[6] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[7] F. Caron, M. Davy, and A. Doucet. Generalized Pólya urn for time-varying Dirichlet processes. In UAI, 2007.
[8] S. Gopal and Y. Yang. Von Mises-Fisher clustering models.
In International Conference on Machine Learning, 2014.
[9] Q. Ho, J. Eisenstein, and E. Xing. Document hierarchies from text and links. In Proceedings of the 21st International Conference on World Wide Web, pages 739–748. ACM, 2012.
[10] J. Kingman. On the genealogy of large populations. Journal of Applied Probability, 19:27–43, 1982.
[11] D. Lin, E. Grimson, and J. Fisher. Construction of dependent Dirichlet processes based on Poisson processes. In Advances in Neural Information Processing Systems, 2010.
[12] S. N. MacEachern. Dependent nonparametric processes. In Bayesian Statistical Science, 1999.
[13] R. M. Neal. Density modeling and clustering using Dirichlet diffusion trees. Bayesian Statistics, 7:619–629, 2003.
[14] A. Rodriguez, D. Dunson, and A. Gelfand. The nested Dirichlet process. Journal of the American Statistical Association, 103(483), 2008.
[15] E. Sudderth and M. Jordan. Shared segmentation of natural scenes using dependent Pitman-Yor processes. In Advances in Neural Information Processing Systems, 2008.
[16] X. Wang and A. McCallum. Topics over time: a non-Markov continuous-time model of topical trends. In Knowledge Discovery and Data Mining, 2006.
[17] S. Williamson, P. Orbanz, and Z. Ghahramani. Dependent Indian buffet processes. In Artificial Intelligence and Statistics, 2010.
A Statistical Decision-Theoretic Framework for Social Choice

Hossein Azari Soufiani∗ David C. Parkes† Lirong Xia‡

Abstract

In this paper, we take a statistical decision-theoretic viewpoint on social choice, putting a focus on the decision to be made on behalf of a system of agents. In our framework, we are given a statistical ranking model, a decision space, and a loss function defined on (parameter, decision) pairs, and formulate social choice mechanisms as decision rules that minimize expected loss. This suggests a general framework for the design and analysis of new social choice mechanisms. We compare Bayesian estimators, which minimize Bayesian expected loss, for the Mallows model and the Condorcet model respectively, and the Kemeny rule. We consider various normative properties, in addition to computational complexity and asymptotic behavior. In particular, we show that the Bayesian estimator for the Condorcet model satisfies some desired properties such as anonymity, neutrality, and monotonicity, can be computed in polynomial time, and is asymptotically different from the other two rules when the data are generated from the Condorcet model for some ground truth parameter.

1 Introduction

Social choice studies the design and evaluation of voting rules (or rank aggregation rules). There have been two main perspectives: reach a compromise among subjective preferences of agents, or make an objectively correct decision. The former has been extensively studied in classical social choice in the context of political elections, while the latter is relatively less developed, even though it can be dated back to the Condorcet Jury Theorem in the 18th century [9]. In many multi-agent and social choice scenarios the main consideration is to achieve the second objective and make an objectively correct decision. Meanwhile, we also want to respect agents' preferences and opinions, and require the voting rule to satisfy well-established normative properties in social choice.
For example, when a group of friends vote to choose a restaurant for dinner, perhaps the most important goal is to find an objectively good restaurant, but it is also important to use a good voting rule in the social choice sense. Even for applications with less societal context, e.g. using voting rules to aggregate rankings in meta-search engines [12], recommender systems [15], crowdsourcing [23], and semantic webs [27], some social choice normative properties are still desired. For example, monotonicity may be desired, which requires that raising the position of an alternative in any vote does not hurt the alternative in the outcome of the voting rule. In addition, we require voting rules to be efficiently computable. Such scenarios pose the following new challenge: how can we design new voting rules with good statistical properties as well as social choice normative properties?

To tackle this challenge, we develop a general framework that adopts statistical decision theory [3]. Our approach couples a statistical ranking model with an explicit decision space and loss function.

∗ azari@google.com, Google Research, New York, NY 10011, USA. The work was done when the author was at Harvard University.
† parkes@eecs.harvard.edu, Harvard University, Cambridge, MA 02138, USA.
‡ xial@cs.rpi.edu, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.

Table 1: Kemeny for winners vs. Bayesian estimators of M¹_φ and M²_φ to choose winners.

                                      Anonymity, neutrality,  Majority,
                                      monotonicity            Condorcet  Consistency  Complexity                         Min. Bayesian risk
Kemeny                                Y                       Y          N            NP-hard, P^NP_||-hard              N
Bayesian est. of M¹_φ (uni. prior)    Y                       N          N            NP-hard, P^NP_||-hard (Theorem 3)  Y
Bayesian est. of M²_φ (uni. prior)    Y                       N          N            P (Theorem 4)                      Y

Given these, we can adopt Bayesian estimators as social choice mechanisms, which make decisions to minimize the expected loss w.r.t. the posterior distribution on the parameters (called the Bayesian risk).
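The framework can be made concrete in a few lines: given a posterior over parameters, a decision space, and a loss function, the Bayesian estimator simply returns the decision with minimum posterior expected loss. A toy sketch (ours, with a tiny made-up posterior over three rankings; the actual rules in the paper use the Mallows and Condorcet posteriors):

```python
def bayes_decision(posterior, decisions, loss):
    """Return the decision minimizing the Bayesian risk, i.e. the expected
    loss under the posterior distribution over parameters.

    posterior: dict mapping parameter -> posterior probability.
    loss(theta, d): loss of decision d when the ground truth is theta.
    """
    def risk(d):
        return sum(p * loss(theta, d) for theta, p in posterior.items())
    return min(decisions, key=risk)

# Toy example: parameters are rankings, decisions are winning alternatives,
# and the loss is 0 iff the chosen alternative is top-ranked in the truth.
posterior = {("a", "b", "c"): 0.4, ("b", "a", "c"): 0.35, ("b", "c", "a"): 0.25}
top_loss = lambda theta, d: 0.0 if theta[0] == d else 1.0
winner = bayes_decision(posterior, ["a", "b", "c"], top_loss)
```

Under this 0-1-style loss the rule picks the alternative with the highest posterior probability of being top-ranked, in the spirit of Young's proposal mentioned in the related work.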
This provides a principled methodology for the design and analysis of new voting rules. To show the viability of the framework, we focus on selecting multiple alternatives (the alternatives that can be thought of as being “tied” for the first place) under a natural extension of the 0-1 loss function for two models: let M1 ϕ denote the Mallows model with fixed dispersion [22], and let M2 ϕ denote the Condorcet model proposed by Condorcet in the 18th century [9, 34]. In both models the dispersion parameter, denoted ϕ, is taken as a fixed parameter. The difference is that in the Mallows model the parameter space is composed of all linear orders over alternatives, while in the Condorcet model the parameter space is composed of all possibly cyclic rankings over alternatives (irreflexive, antisymmetric, and total binary relations). M2 ϕ is a natural model that captures real-world scenarios where the ground truth may contain cycles, or agents’ preferences are cyclic, but they have to report a linear order due to the protocol. More importantly, as we will show later, a Bayesian estimator on M2 ϕ is superior from a computational viewpoint. Through this approach, we obtain two voting rules as Bayesian estimators and then evaluate them with respect to various normative properties, including anonymity, neutrality, monotonicity, the majority criterion, the Condorcet criterion and consistency. Both rules satisfy anonymity, neutrality, and monotonicity, but fail the majority criterion, Condorcet criterion,1 and consistency. Admittedly, the two rules do not enjoy outstanding normative properties, but they are not bad either. We also investigate the computational complexity of the two rules. Strikingly, despite the similarity of the two models, the Bayesian estimator for M2 ϕ can be computed in polynomial time, while computing the Bayesian estimator for M1 ϕ is PNP || -hard, which means that it is at least NP-hard. Our results are summarized in Table 1. 
We also compare the asymptotic outcomes of the two rules with the Kemeny rule for winners, which is a natural extension of the maximum likelihood estimator of M1 ϕ proposed by Fishburn [14]. It turns out that when n votes are generated under M1 ϕ, all three rules select the same winner asymptotically almost surely (a.a.s.) as n →∞. When the votes are generated according to M2 ϕ, the rule for M1 ϕ still selects the same winner as Kemeny a.a.s.; however, for some parameters, the winner selected by the rule for M2 ϕ is different with non-negligible probability. These are confirmed by experiments on synthetic datasets. Related work. Along the second perspective in social choice (to make an objectively correct decision), in addition to Condorcet’s statistical approach to social choice [9, 34], most previous work in economics, political science, and statistics focused on extending the theorem to heterogeneous, correlated, or strategic agents for two alternatives, see [25, 1] among many others. Recent work in computer science views agents’ votes as i.i.d. samples from a statistical model, and computes the MLE to estimate the parameters that maximize the likelihood [10, 11, 33, 32, 2, 29, 7]. A limitation of these approaches is that they estimate the parameters of the model, but may not directly inform the right decision to make in the multi-agent context. The main approach has been to return the modal rank order implied by the estimated parameters, or the alternative with the highest, predicted marginal probability of being ranked in the top position. There have also been some proposals to go beyond MLE in social choice. In fact, Young [34] proposed to select a winning alternative that is “most likely to be the best (i.e., top-ranked in the true ranking)” and provided formulas to compute it for three alternatives. This idea has been formalized and extended by Procaccia et al. 
[29] to choose a given number of alternatives with highest marginal probability under the Mallows model. More recently, independently of our work, Elkind and Shah [13] investigated a similar question for choosing multiple winners under the Condorcet model. We will see that these are special cases of our proposed framework in Example 2. Pivato [26] conducted a similar study to Conitzer and Sandholm [10], examining voting rules that can be interpreted as expected-utility maximizers. We are not aware of previous work that frames the problem of social choice from the viewpoint of statistical decision theory, which is our main conceptual contribution. Technically, the approach taken in this paper advocates a general paradigm of “design by statistics, evaluation by social choice and computer science”. We are not aware of a previous work following this paradigm to design and evaluate new rules. Moreover, the normative properties for the two voting rules investigated in this paper are novel, even though these rules are not really novel. Our result on the computational complexity of the first rule strengthens the NP-hardness result by Procaccia et al. [29], and the complexity of the second rule (Theorem 5) was independently discovered by Elkind and Shah [13]. The statistical decision-theoretic framework is quite general, allowing considerations such as estimators that minimize the maximum expected loss, or the maximum expected regret [3]. In a different context, focused on uncertainty about the availability of alternatives, Lu and Boutilier [20] adopt a decision-theoretic view of the design of an optimal voting rule. Caragiannis et al. [8] studied the robustness of social choice mechanisms w.r.t. model uncertainty, and characterized a unique social choice mechanism that is consistent w.r.t. a large class of ranking models.

1The new voting rule for M1 ϕ fails them for all ϕ < 1/√2.
A number of recent papers in computational social choice take utilitarian and decision-theoretic approaches towards social choice [28, 6, 4, 5]. Most of them evaluate the joint decision w.r.t. agents' subjective preferences, for example the sum of agents' subjective utilities (i.e. the social welfare). We don't view this as fitting into the classical approach to statistical decision theory as formulated by Wald [30]. In our framework, the joint decision is evaluated objectively w.r.t. the ground truth in the statistical model. Several papers in machine learning developed algorithms to compute MLE or Bayesian estimators for popular ranking models [18, 19, 21], but without considering the normative properties of the estimators.

2 Preliminaries

In social choice, we have a set of m alternatives C = {c1, . . . , cm} and a set of n agents. Let L(C) denote the set of all linear orders over C. For any alternative c, let Lc(C) denote the set of linear orders over C where c is ranked at the top. Agent j uses a linear order Vj ∈ L(C) to represent her preferences, called her vote. The collection of agents' votes is called a profile, denoted by P = {V1, . . . , Vn}. An (irresolute) voting rule r : L(C)^n → (2^C \ ∅) selects a set of winners that are "tied" for the first place for every profile of n votes. For any pair of linear orders V, W, let Kendall(V, W) denote the Kendall-tau distance between V and W, that is, the number of different pairwise comparisons in V and W. The Kemeny rule (a.k.a. Kemeny-Young method) [17, 35] selects all linear orders with the minimum Kendall-tau distance from the preference profile P, that is, Kemeny(P) = arg min_W Kendall(P, W). The most well-known variant of Kemeny to select winning alternatives, denoted by KemenyC, is due to Fishburn [14], who defined it as a voting rule that selects all alternatives that are ranked in the top position of some winning linear orders under the Kemeny rule.
That is, KemenyC(P) = {top(V) : V ∈ Kemeny(P)}, where top(V) is the top-ranked alternative in V. Voting rules are often evaluated by the following normative properties. An irresolute rule r satisfies:
• anonymity, if r is insensitive to permutations over agents;
• neutrality, if r is insensitive to permutations over alternatives;
• monotonicity, if for any P, any c ∈ r(P), and any P′ that is obtained from P by only raising the positions of c in one or multiple votes, c ∈ r(P′);
• the Condorcet criterion, if for any profile P where a Condorcet winner exists, it must be the unique winner. A Condorcet winner is the alternative that beats every other alternative in pairwise elections;
• the majority criterion, if for any profile P where an alternative c is ranked in the top position in more than half of the votes, r(P) = {c}. If r satisfies the Condorcet criterion then it also satisfies the majority criterion;
• consistency, if for any pair of profiles P1, P2 with r(P1) ∩ r(P2) ≠ ∅, r(P1 ∪ P2) = r(P1) ∩ r(P2).
For any profile P, its weighted majority graph (WMG), denoted by WMG(P), is a weighted directed graph whose vertices are C, and there is an edge between any pair of alternatives (a, b) with weight wP(a, b) = #{V ∈ P : a ≻V b} − #{V ∈ P : b ≻V a}. A parametric model M = (Θ, S, Pr) is composed of three parts: a parameter space Θ, a sample space S consisting of all datasets, and a set of probability distributions over S indexed by elements of Θ: for each θ ∈ Θ, the distribution indexed by θ is denoted by Pr(·|θ).2 Given a parametric model M, a maximum likelihood estimator (MLE) is a function fMLE : S → Θ such that for any data P ∈ S, fMLE(P) is a parameter that maximizes the likelihood of the data. That is, fMLE(P) ∈ arg max_{θ∈Θ} Pr(P|θ). In this paper we focus on parametric ranking models.
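The definitions above are easy to exercise in code. The sketch below is ours, not the authors' (function names like `kemeny_winners` are our own); it brute-forces KemenyC over all m! linear orders, so it is only practical for small m:

```python
from itertools import permutations

def kendall(v, w):
    """Kendall-tau distance: number of pairwise comparisons on which the
    linear orders v and w disagree (orders are tuples, most-preferred first)."""
    pv = {c: i for i, c in enumerate(v)}
    pw = {c: i for i, c in enumerate(w)}
    cs = list(v)
    return sum((pv[a] - pv[b]) * (pw[a] - pw[b]) < 0
               for i, a in enumerate(cs) for b in cs[i + 1:])

def kemeny_orders(profile):
    """All linear orders minimizing the total Kendall-tau distance to the
    profile (brute force over all m! orders)."""
    alts = profile[0]
    best, argmins = None, []
    for w in permutations(alts):
        d = sum(kendall(v, w) for v in profile)
        if best is None or d < best:
            best, argmins = d, [w]
        elif d == best:
            argmins.append(w)
    return argmins

def kemeny_winners(profile):
    """Fishburn's KemenyC: top-ranked alternatives of the winning orders."""
    return {w[0] for w in kemeny_orders(profile)}
```

For the two-vote profile {[a ≻ b ≻ c], [a ≻ c ≻ b]}, both votes are themselves Kemeny orders and a is the unique KemenyC winner.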
Given C, a parametric ranking model MC = (Θ, Pr) is composed of a parameter space Θ and a distribution Pr(·|θ) over L(C) for each θ ∈ Θ, such that for any number of voters n, the sample space is Sn = L(C)^n, where each vote is generated i.i.d. from Pr(·|θ). Hence, for any profile P ∈ Sn and any θ ∈ Θ, we have Pr(P|θ) = ∏_{V∈P} Pr(V|θ). We omit the sample space because it is determined by C and n.

Definition 1 In the Mallows model [22], a parameter is composed of a linear order W ∈ L(C) and a dispersion parameter ϕ with 0 < ϕ < 1. For any profile P and θ = (W, ϕ), Pr(P|θ) = ∏_{V∈P} (1/Z) ϕ^{Kendall(V,W)}, where Z is the normalization factor Z = Σ_{V∈L(C)} ϕ^{Kendall(V,W)}.

Statistical decision theory [30, 3] studies scenarios where the decision maker must make a decision d ∈ D based on data P generated from a parametric model M = (Θ, S, Pr). The quality of the decision is evaluated by a loss function L : Θ × D → R, which takes the true parameter and the decision as inputs. In this paper, we focus on the Bayesian principle of statistical decision theory to design social choice mechanisms as choice functions that minimize the Bayesian risk under a prior distribution over Θ. More precisely, the Bayesian risk RB(P, d) is the expected loss of the decision d when the parameter is generated according to the posterior distribution given data P. That is, RB(P, d) = E_{θ|P} L(θ, d). Given a parametric model M, a loss function L, and a prior distribution over Θ, a (deterministic) Bayesian estimator fB is a decision rule that makes a deterministic decision in D to minimize the Bayesian risk, that is, for any P ∈ S, fB(P) ∈ arg min_d RB(P, d). We focus on deterministic estimators in this work and leave randomized estimators for future research.
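Definition 1 can be checked numerically. The sketch below is our illustration (not from the paper); it computes the likelihood with a brute-force normalization over all m! orders. The closed form Z = ∏_{i=1}^{m−1}(1 + ϕ + · · · + ϕ^i) is a standard fact about the Mallows model, not something stated here:

```python
from itertools import permutations

def kendall(v, w):
    """Kendall-tau distance between two linear orders (tuples, best first)."""
    pv = {c: i for i, c in enumerate(v)}
    pw = {c: i for i, c in enumerate(w)}
    cs = list(v)
    return sum((pv[a] - pv[b]) * (pw[a] - pw[b]) < 0
               for i, a in enumerate(cs) for b in cs[i + 1:])

def mallows_likelihood(profile, w, phi):
    """Pr(P | W = w) in the Mallows model with dispersion phi: each vote is
    drawn i.i.d. with probability phi**Kendall(V, w) / Z."""
    Z = sum(phi ** kendall(v, w) for v in permutations(w))  # brute-force normalization
    prob = 1.0
    for v in profile:
        prob *= phi ** kendall(v, w) / Z
    return prob
```

As a sanity check, the single-vote likelihoods sum to 1 over all m! possible votes, and for m = 3 the brute-force Z agrees with (1 + ϕ)(1 + ϕ + ϕ²).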
Example 1 When Θ is discrete, an MLE of a parametric model M is a Bayesian estimator of the statistical decision problem (M, D = Θ, L0-1) under the uniform prior distribution, where L0-1 is the 0-1 loss function such that L0-1(θ, d) = 0 if θ = d, and otherwise L0-1(θ, d) = 1. In this sense, all previous MLE approaches in social choice can be viewed as Bayesian estimators of a statistical decision-theoretic framework for social choice with D = Θ, a 0-1 loss function, and the uniform prior.

3 Our Framework

Our framework is quite general and flexible because we can choose any parametric ranking model, any decision space, any loss function, and any prior, and use the resulting Bayesian estimators as social choice mechanisms. Common choices of both Θ and D are L(C), C, and (2^C \ ∅).

Definition 2 A statistical decision-theoretic framework for social choice is a tuple F = (MC, D, L), where C is the set of alternatives, MC = (Θ, Pr) is a parametric ranking model, D is the decision space, and L : Θ × D → R is a loss function.

Let B(C) denote the set of all irreflexive, antisymmetric, and total binary relations over C. For any c ∈ C, let Bc(C) denote the relations in B(C) where c ≻ a for all a ∈ C − {c}. It follows that L(C) ⊆ B(C), and moreover, the Kendall-tau distance can be defined to count the number of pairwise disagreements between elements of B(C). In the rest of the paper, we focus on the following two parametric ranking models, where the dispersion is a fixed parameter.

2This notation should not be taken to mean a conditional distribution over S unless we are taking a Bayesian point of view.

Definition 3 (Mallows model with fixed dispersion, and the Condorcet model) Let M1 ϕ denote the Mallows model with fixed dispersion, where the parameter space is Θ = L(C) and, given any W ∈ Θ, Pr(·|W) is Pr(·|(W, ϕ)) in the Mallows model, where ϕ is fixed. In the Condorcet model, M2 ϕ, the parameter space is Θ = B(C).
For any W ∈ Θ and any profile P, we have Pr(P|W) = ∏_{V∈P} (1/Z) ϕ^{Kendall(V,W)}, where Z is the normalization factor Z = Σ_{V∈B(C)} ϕ^{Kendall(V,W)}, and the parameter ϕ is fixed.3 M1 ϕ and M2 ϕ degenerate to the Condorcet model for two alternatives [9]. The Kemeny rule that selects a linear order is an MLE of M1 ϕ for any ϕ. We now formally define two statistical decision-theoretic frameworks associated with M1 ϕ and M2 ϕ, which are the focus of the rest of our paper.

Definition 4 For Θ = L(C) or B(C), any θ ∈ Θ, and any c ∈ C, we define a loss function Ltop(θ, c) such that Ltop(θ, c) = 0 if for all b ∈ C, c ≻ b in θ; otherwise Ltop(θ, c) = 1. Let F1 ϕ = (M1 ϕ, 2^C \ ∅, Ltop) and F2 ϕ = (M2 ϕ, 2^C \ ∅, Ltop), where for any nonempty C′ ⊆ C, Ltop(θ, C′) = Σ_{c∈C′} Ltop(θ, c)/|C′|. Let f 1 B (respectively, f 2 B) denote the Bayesian estimators of F1 ϕ (respectively, F2 ϕ) under the uniform prior.

We note that Ltop in the above definition takes a parameter and a decision in 2^C \ ∅ as inputs, which makes it different from the 0-1 loss function L0-1 that takes a pair of parameters as inputs, as in Example 1. Hence, f 1 B and f 2 B are not the MLEs of their respective models, as was the case in Example 1. We focus on voting rules obtained by our framework with Ltop. Certainly our framework is not limited to this loss function.

Example 2 Bayesian estimators f 1 B and f 2 B coincide with Young [34]'s idea of selecting the alternative that is "most likely to be the best (i.e., top-ranked in the true ranking)", under F1 ϕ and F2 ϕ respectively. This gives a theoretical justification of Young's idea and other followups under our framework. Specifically, f 1 B is similar to the rule studied by Procaccia et al. [29], and f 2 B was independently studied by Elkind and Shah [13].

4 Normative Properties of Bayesian Estimators

All omitted proofs can be found in the full version on arXiv.

Theorem 1 For any ϕ, f 1 B satisfies anonymity, neutrality, and monotonicity.
f 1 B does not satisfy majority or the Condorcet criterion for any ϕ < 1/√2,4 and it does not satisfy consistency.

Proof sketch: Anonymity and neutrality are obviously satisfied.

Monotonicity. Monotonicity follows from the following lemma.

Lemma 1 For any c ∈ C, let P′ denote a profile obtained from P by raising the position of c in one vote. For any W ∈ Lc(C), Pr(P′|W) = Pr(P|W)/ϕ; for any b ∈ C and any V ∈ Lb(C), Pr(P′|V) ≤ Pr(P|V)/ϕ.

Majority and the Condorcet criterion. Let C = {c, b, c3, . . . , cm}. We construct a profile P∗ where c is ranked in the top position in more than half of the votes, but c ∉ f 1 B(P∗). For any k, let P∗ denote a profile composed of k copies of [c ≻ b ≻ c3 ≻ · · · ≻ cm], one copy of [c ≻ b ≻ cm ≻ · · · ≻ c3], and k − 1 copies of [b ≻ cm ≻ · · · ≻ c3 ≻ c]. It is not hard to verify that the WMG of P∗ is as in Figure 1 (a). Then, we prove that for any ϕ < 1/√2, we can find m and k so that

Σ_{V∈Lc(C)} Pr(P∗|V) / Σ_{W∈Lb(C)} Pr(P∗|W) = [(1 + ϕ^{2k} + · · · + ϕ^{2k(m−2)}) / (1 + ϕ^2 + · · · + ϕ^{2(m−2)})] · ϕ^2 < 1.

It follows that c is the Condorcet winner in P∗ but it does not minimize the Bayesian risk under M1 ϕ, which means that it is not the winner under f 1 B.

3In the Condorcet model the sample space is B(C)^n [31]. We study a variant with sample space L(C)^n.
4Characterizing the majority and Condorcet criteria of f 1 B for ϕ ≥ 1/√2 is an open question.
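For small m, f 1 B itself can be brute-forced: under Ltop, the Bayesian risk of a set equals 1 minus the average posterior probability of its members being top-ranked, so the risk is minimized by the alternatives with maximal posterior top-probability. The sketch below is ours (function names are our own), with the uniform prior over L(C):

```python
from itertools import permutations

def kendall(v, w):
    """Kendall-tau distance between two linear orders (tuples, best first)."""
    pv = {c: i for i, c in enumerate(v)}
    pw = {c: i for i, c in enumerate(w)}
    cs = list(v)
    return sum((pv[a] - pv[b]) * (pw[a] - pw[b]) < 0
               for i, a in enumerate(cs) for b in cs[i + 1:])

def f1_bayes(profile, phi, tol=1e-12):
    """Brute-force f^1_B: the posterior over linear orders is proportional to
    phi**(total Kendall distance to the profile); accumulate that mass by top
    alternative and return all alternatives with maximal posterior top-mass."""
    alts = profile[0]
    top_mass = {c: 0.0 for c in alts}
    for w in permutations(alts):
        top_mass[w[0]] += phi ** sum(kendall(v, w) for v in profile)
    best = max(top_mass.values())
    return {c for c, m in top_mass.items() if m >= best - tol}
```

The normalization constant cancels in the arg max, so the unnormalized posterior mass suffices.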
Figure 1: WMGs of the profiles used in the proofs: (a) the WMG of P∗, for majority and Condorcet (Thm. 1); (b) the WMGs of P1 (left) and P2 (right), for consistency (Thm. 1); (c) the WMG of P′, for computational complexity (Thm. 3).

Consistency. We construct an example to show that f 1 B does not satisfy consistency. In our construction m and n are even, and C = {c, b, c3, c4}. Let P1 and P2 denote profiles whose WMGs are as shown in Figure 1 (b), respectively. We have the following lemma.

Lemma 2 Let P ∈ {P1, P2}. Then Σ_{V∈Lc(C)} Pr(P|V) / Σ_{W∈Lb(C)} Pr(P|W) = 3(1 + ϕ^{4k}) / (2(1 + ϕ^{2k} + ϕ^{4k})). For any 0 < ϕ < 1, 3(1 + ϕ^{4k}) / (2(1 + ϕ^{2k} + ϕ^{4k})) > 1 for all k.

It is not hard to verify that f 1 B(P1) = f 1 B(P2) = {c} and f 1 B(P1 ∪ P2) = {c, b}, which means that f 1 B is not consistent. □

Similarly, we can prove the following theorem for f 2 B.

Theorem 2 For any ϕ, f 2 B satisfies anonymity, neutrality, and monotonicity. It does not satisfy majority, the Condorcet criterion, or consistency.

By Theorems 1 and 2, f 1 B and f 2 B do not satisfy as many desired normative properties as the Kemeny rule for winners. On the other hand, they minimize the Bayesian risk under F1 ϕ and F2 ϕ, respectively, which Kemeny does for neither. In addition, neither f 1 B nor f 2 B satisfies consistency, which means that they are not positional scoring rules.

5 Computational Complexity

We consider the following two types of decision problems.

Definition 5 In the BETTER BAYESIAN DECISION problem for a statistical decision-theoretic framework (MC, D, L) under a prior distribution, we are given d1, d2 ∈ D and a profile P, and we are asked whether RB(P, d1) ≤ RB(P, d2).

We are also interested in checking whether a given alternative is the optimal decision.

Definition 6 In the OPTIMAL BAYESIAN DECISION problem for a statistical decision-theoretic framework (MC, D, L) under a prior distribution, we are given d ∈ D and a profile P, and we are asked whether d minimizes the Bayesian risk RB(P, ·).
PNP|| is the class of decision problems that can be computed by a P oracle machine with a polynomial number of parallel calls to an NP oracle. A decision problem A is PNP||-hard if, for any PNP|| problem B, there exists a polynomial-time many-one reduction from B to A. It is known that PNP||-hard problems are NP-hard.

Theorem 3 For any ϕ, BETTER BAYESIAN DECISION and OPTIMAL BAYESIAN DECISION for F1 ϕ under the uniform prior are PNP||-hard.

Proof: The hardness of both problems is proved by a unified reduction from the KEMENY WINNER problem, which is PNP||-complete [16]. In a KEMENY WINNER instance, we are given a profile P and an alternative c, and we are asked whether c is ranked in the top position of at least one V ∈ L(C) that minimizes Kendall(P, V). For any alternative c, the Kemeny score of c under M1 ϕ is the smallest distance between the profile P and any linear order where c is ranked in the top position. We next prove that when ϕ < 1/m!, the Bayesian risk of c is largely determined by the Kemeny score of c.

Lemma 3 For any ϕ < 1/m!, any c, b ∈ C, and any profile P, if the Kemeny score of c is strictly smaller than the Kemeny score of b in P, then RB(P, c) < RB(P, b) for M1 ϕ.

Let t be any natural number such that ϕ^t < 1/m!. For any KEMENY WINNER instance (P, c) over alternatives C′, we add two more alternatives {a, b} and define a profile P′ whose WMG is as shown in Figure 1(c), using McGarvey's trick [24]. The WMG of P′ contains WMG(P) as a subgraph, where the weights are 6 times the weights in WMG(P). Then, we let P∗ = tP′, that is, t copies of P′. It follows that for any V ∈ L(C), Pr(P∗|V, ϕ) = Pr(P′|V, ϕ^t). By Lemma 3, if an alternative e has the strictly lowest Kemeny score for profile P′, then it is the unique alternative that minimizes the Bayesian risk for P′ and dispersion parameter ϕ^t, which means that e minimizes the Bayesian risk for P∗ and dispersion parameter ϕ.
Let O denote the set of linear orders over C′ that minimize the Kendall-tau distance from P, and let k denote this minimum distance. Choose an arbitrary V′ ∈ O. Let V = [b ≻ a ≻ V′]. It follows that Kendall(P′, V) = 4 + 6k. If there exists W′ ∈ O where c is ranked in the top position, then we let W = [a ≻ c ≻ b ≻ (V′ − {c})]. We have Kendall(P′, W) = 2 + 6k. If c is not a Kemeny winner in P, then for any W where a is ranked in the top position, Kendall(P′, W) ≥ 6 + 6k. Therefore, a minimizes the Bayesian risk if and only if c is a Kemeny winner in P, and if a does not minimize the Bayesian risk, then b does. Hence BETTER BAYESIAN DECISION (checking whether a is better than b) and OPTIMAL BAYESIAN DECISION (checking whether a is the optimal alternative) are PNP||-hard. □

We note that OPTIMAL BAYESIAN DECISION in Theorem 3 is equivalent to checking whether a given alternative c is in f 1 B(P). We do not know whether these problems are PNP||-complete. In sharp contrast to f 1 B, the next theorem states that f 2 B under the uniform prior is in P.

Theorem 4 For any rational number5 ϕ, BETTER BAYESIAN DECISION and OPTIMAL BAYESIAN DECISION for F2 ϕ under the uniform prior are in P.

The theorem is a corollary of the following stronger theorem, which provides a closed-form formula for the Bayesian risk under F2 ϕ.6 We recall that for any profile P and any pair of alternatives c, b, wP(c, b) is the weight on the edge c → b in the weighted majority graph of P.

Theorem 5 For F2 ϕ under the uniform prior, for any c ∈ C and any profile P, RB(P, c) = 1 − ∏_{b≠c} 1/(1 + ϕ^{wP(c,b)}).

The comparisons of Kemeny, f 1 B, and f 2 B are summarized in Table 1. According to the criteria we consider, none of the three outperforms the others. Kemeny does well in normative properties, but does not minimize the Bayesian risk under either F1 ϕ or F2 ϕ, and is hard to compute. f 1 B minimizes the Bayesian risk under F1 ϕ, but is hard to compute.
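Theorem 5 turns f 2 B into a few lines of arithmetic on the WMG. The sketch below is ours (function names are our own; we return all risk-minimizing alternatives as a set):

```python
def wmg(profile):
    """Weighted majority graph: w(a, b) = #{V : a above b} - #{V : b above a}."""
    alts = profile[0]
    positions = [{c: i for i, c in enumerate(v)} for v in profile]
    return {(a, b): sum(1 if p[a] < p[b] else -1 for p in positions)
            for a in alts for b in alts if a != b}

def f2_risk(profile, c, phi):
    """Closed-form Bayesian risk of alternative c under F^2_phi (Theorem 5):
    R_B(P, c) = 1 - prod over b != c of 1 / (1 + phi ** w_P(c, b))."""
    w = wmg(profile)
    prod = 1.0
    for b in profile[0]:
        if b != c:
            prod *= 1.0 / (1.0 + phi ** w[(c, b)])
    return 1.0 - prod

def f2_bayes(profile, phi, tol=1e-12):
    """The estimator f^2_B: all alternatives minimizing the Bayesian risk."""
    risks = {c: f2_risk(profile, c, phi) for c in profile[0]}
    best = min(risks.values())
    return {c for c, r in risks.items() if r <= best + tol}
```

On a unanimous profile, the top alternative of the common order has the largest outgoing weights and thus the smallest risk, as expected.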
We would like to highlight f 2 B, which minimizes the Bayesian risk under F2 ϕ and, more importantly, can be computed in polynomial time despite the similarity between F1 ϕ and F2 ϕ.

6 Asymptotic Comparisons

In this section, we ask the following question: as the number of voters n → ∞, what is the probability that Kemeny, f 1 B, and f 2 B choose different winners? We show that when the data are generated from M1 ϕ, all three methods are equal asymptotically almost surely (a.a.s.), that is, they are equal with probability 1 as n → ∞.

Theorem 6 Let Pn denote a profile of n votes generated i.i.d. from M1 ϕ given W ∈ Lc(C). Then, lim_{n→∞} Pr(Kemeny(Pn) = f 1 B(Pn) = f 2 B(Pn) = c) = 1.

However, when the data are generated from M2 ϕ, we have a different story.

Theorem 7 For any W ∈ B(C) and any ϕ, f 1 B(Pn) = Kemeny(Pn) a.a.s. as n → ∞ and votes in Pn are generated i.i.d. from M2 ϕ given W. For any m ≥ 5, there exists W ∈ B(C) such that for any ϕ, there exists ϵ > 0 such that with probability at least ϵ, f 1 B(Pn) ≠ f 2 B(Pn) and Kemeny(Pn) ≠ f 2 B(Pn) as n → ∞ and votes in Pn are generated i.i.d. from M2 ϕ given W.

5We require ϕ to be rational to avoid representational issues.
6The formula resembles Young's calculation for three alternatives [34], where it was not clear whether the calculation was done for F2 ϕ. Recently it was clarified by Xia [31] that this is indeed the case.
Figure 2: The ground truth W and asymptotic comparisons between Kemeny and g in Definition 7: (a) the ground truth W ∈ B(C) for m = 5; (b) the probability that g is different from Kemeny under M2 ϕ.

Proof sketch: The first part of Theorem 7 is proved by the Central Limit Theorem. For the second part, the proof for m = 5 uses an acyclic W ∈ B(C) illustrated in Figure 2 (a). □

Theorem 6 suggests that, when n is large and the votes are generated from M1 ϕ, it does not matter much which of f 1 B, f 2 B, and Kemeny we use. A similar observation has been made for other voting rules by Caragiannis et al. [7]. On the other hand, Theorem 7 states that when the votes are generated from M2 ϕ, interestingly, for some ground truth parameter, f 2 B is different from the other two with non-negligible probability, and as we will see in the experiments, this probability can be quite large.

6.1 Experiments

We focus on the comparison between the rule f 2 B and Kemeny using synthetic data generated from M2 ϕ given the binary relation W illustrated in Figure 2 (a). By Theorem 5, the computation involves computing ϕ^{Ω(n)}, which is exponentially small for large n since ϕ < 1. Hence, we need a special data structure to handle the computation of f 2 B, because a straightforward implementation easily loses precision. In our experiments, we use the following approximation for f 2 B.

Definition 7 For any c ∈ C and profile P, let s(c, P) = Σ_{b: wP(b,c)>0} wP(b, c). Let g be the voting rule such that for any profile P, g(P) = arg min_c s(c, P).

In words, g selects the alternative c with the minimum total weight on the incoming edges in the WMG. By Theorem 5, the Bayesian risk is largely determined by ϕ^{−s(c,P)}. Therefore, g is a good approximation of f 2 B for reasonably large n. Formally, this is stated in the following theorem.

Theorem 8 For any W ∈ B(C) and any ϕ, f 2 B(Pn) = g(Pn) a.a.s. as n → ∞ and votes in Pn are generated i.i.d. from M2 ϕ given W.
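Definition 7 is straightforward to implement, and it sidesteps the underflow of ϕ^{−s(c,P)} entirely since it only sums integer WMG weights. The sketch below is ours (function names are our own):

```python
def wmg(profile):
    """Weighted majority graph: w(a, b) = #{V : a above b} - #{V : b above a}."""
    alts = profile[0]
    positions = [{c: i for i, c in enumerate(v)} for v in profile]
    return {(a, b): sum(1 if p[a] < p[b] else -1 for p in positions)
            for a in alts for b in alts if a != b}

def g_rule(profile):
    """Definition 7: select the alternative(s) whose incoming WMG edges carry
    the minimum total positive weight; a numerically safe surrogate for f^2_B."""
    alts = profile[0]
    w = wmg(profile)
    s = {c: sum(w[(b, c)] for b in alts if b != c and w[(b, c)] > 0)
         for c in alts}
    best = min(s.values())
    return {c for c in alts if s[c] == best}
```

A Condorcet winner has no positive incoming edges, so s(c, P) = 0 and it is always selected by g.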
In our experiments, data are generated by M2 ϕ given W in Figure 2 (a) for m = 5, n ∈ {100, 200, . . . , 2000}, and ϕ ∈ {0.1, 0.5, 0.9}. For each setting we generate 3000 profiles, and calculate the fraction of trials in which g and Kemeny are different. The results are shown in Figure 2 (b). We observe that for ϕ = 0.1 and 0.5, the probability that g(Pn) ≠ Kemeny(Pn) is about 30% for most n in our experiments; when ϕ = 0.9, the probability is about 10%. In light of Theorem 8, these results confirm Theorem 7. We have also conducted similar experiments for M1 ϕ, and found that the g winner is the same as the Kemeny winner in all 10000 randomly generated profiles with m = 5, n = 100. This provides a check for Theorem 6.

7 Acknowledgments

We thank Shivani Agarwal, Craig Boutilier, Yiling Chen, Vincent Conitzer, Edith Elkind, Ariel Procaccia, and anonymous reviewers of AAAI-14 and NIPS-14 for helpful suggestions and discussions. Azari Soufiani acknowledges the Siebel foundation for the scholarship in his last year of PhD studies. Parkes was supported in part by NSF grant CCF #1301976 and the SEAS TomKat fund. Xia acknowledges an RPI startup fund for support.

References

[1] David Austen-Smith and Jeffrey S. Banks. Information Aggregation, Rationality, and the Condorcet Jury Theorem. The American Political Science Review, 90(1):34–45, 1996.
[2] Hossein Azari Soufiani, David C. Parkes, and Lirong Xia. Random utility theory for social choice. In Proc. NIPS, pages 126–134, 2012.
[3] James O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, 2nd edition, 1985.
[4] Craig Boutilier and Tyler Lu. Probabilistic and Utility-theoretic Models in Social Choice: Challenges for Learning, Elicitation, and Manipulation. In IJCAI-11 Workshop on Social Choice and AI, 2011.
[5] Craig Boutilier, Ioannis Caragiannis, Simi Haber, Tyler Lu, Ariel D. Procaccia, and Or Sheffet. Optimal social choice functions: A utilitarian view. In Proc. EC, pages 197–214, 2012.
[6] Ioannis Caragiannis and Ariel D. Procaccia. Voting Almost Maximizes Social Welfare Despite Limited Communication. Artificial Intelligence, 175(9–10):1655–1671, 2011.
[7] Ioannis Caragiannis, Ariel Procaccia, and Nisarg Shah. When do noisy votes reveal the truth? In Proc. EC, 2013.
[8] Ioannis Caragiannis, Ariel D. Procaccia, and Nisarg Shah. Modal Ranking: A Uniquely Robust Voting Rule. In Proc. AAAI, 2014.
[9] Marquis de Condorcet. Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. Paris: L'Imprimerie Royale, 1785.
[10] Vincent Conitzer and Tuomas Sandholm. Common voting rules as maximum likelihood estimators. In Proc. UAI, pages 145–152, Edinburgh, UK, 2005.
[11] Vincent Conitzer, Matthew Rognlie, and Lirong Xia. Preference functions that score rankings and maximum likelihood estimation. In Proc. IJCAI, pages 109–115, 2009.
[12] Cynthia Dwork, Ravi Kumar, Moni Naor, and D. Sivakumar. Rank aggregation methods for the web. In Proc. WWW, pages 613–622, 2001.
[13] Edith Elkind and Nisarg Shah. How to Pick the Best Alternative Given Noisy Cyclic Preferences? In Proc. UAI, 2014.
[14] Peter C. Fishburn. Condorcet social choice functions. SIAM Journal on Applied Mathematics, 33(3):469–489, 1977.
[15] Sumit Ghosh, Manisha Mundhe, Karina Hernandez, and Sandip Sen. Voting for movies: the anatomy of a recommender system. In Proc. AAMAS, pages 434–435, 1999.
[16] Edith Hemaspaandra, Holger Spakowski, and Jörg Vogel. The complexity of Kemeny elections. Theoretical Computer Science, 349(3):382–391, December 2005.
[17] John Kemeny. Mathematics without numbers. Daedalus, 88:575–591, 1959.
[18] Jen-Wei Kuo, Pu-Jen Cheng, and Hsin-Min Wang. Learning to Rank from Bayesian Decision Inference. In Proc. CIKM, pages 827–836, 2009.
[19] Bo Long, Olivier Chapelle, Ya Zhang, Yi Chang, Zhaohui Zheng, and Belle Tseng. Active Learning for Ranking Through Expected Loss Optimization. In Proc. SIGIR, pages 267–274, 2010.
[20] Tyler Lu and Craig Boutilier. The Unavailable Candidate Model: A Decision-theoretic View of Social Choice. In Proc. EC, pages 263–274, 2010.
[21] Tyler Lu and Craig Boutilier. Learning Mallows models with pairwise preferences. In Proc. ICML, pages 145–152, 2011.
[22] Colin L. Mallows. Non-null ranking model. Biometrika, 44(1/2):114–130, 1957.
[23] Andrew Mao, Ariel D. Procaccia, and Yiling Chen. Better human computation through principled voting. In Proc. AAAI, 2013.
[24] David C. McGarvey. A theorem on the construction of voting paradoxes. Econometrica, 21(4):608–610, 1953.
[25] Shmuel Nitzan and Jacob Paroush. The significance of independent decisions in uncertain dichotomous choice situations. Theory and Decision, 17(1):47–60, 1984.
[26] Marcus Pivato. Voting rules as statistical estimators. Social Choice and Welfare, 40(2):581–630, 2013.
[27] Daniele Porello and Ulle Endriss. Ontology Merging as Social Choice: Judgment Aggregation under the Open World Assumption. Journal of Logic and Computation, 2013.
[28] Ariel D. Procaccia and Jeffrey S. Rosenschein. The Distortion of Cardinal Preferences in Voting. In Proc. CIA, volume 4149 of LNAI, pages 317–331, 2006.
[29] Ariel D. Procaccia, Sashank J. Reddi, and Nisarg Shah. A maximum likelihood approach for selecting sets of alternatives. In Proc. UAI, 2012.
[30] Abraham Wald. Statistical Decision Functions. New York: Wiley, 1950.
[31] Lirong Xia. Deciphering Young's interpretation of Condorcet's model. ArXiv, 2014.
[32] Lirong Xia and Vincent Conitzer. A maximum likelihood approach towards aggregating partial orders. In Proc. IJCAI, pages 446–451, Barcelona, Catalonia, Spain, 2011.
[33] Lirong Xia, Vincent Conitzer, and Jérôme Lang. Aggregating preferences in multi-issue domains by using maximum likelihood estimators. In Proc. AAMAS, pages 399–406, 2010.
[34] H. Peyton Young. Condorcet's theory of voting. American Political Science Review, 82:1231–1244, 1988.
[35] H. Peyton Young and Arthur Levenglick.
A consistent extension of Condorcet's election principle. SIAM Journal of Applied Mathematics, 35(2):285–300, 1978.
Generative Adversarial Nets

Ian J. Goodfellow∗, Jean Pouget-Abadie†, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair‡, Aaron Courville, Yoshua Bengio§
Département d'informatique et de recherche opérationnelle
Université de Montréal
Montréal, QC H3C 3J7

Abstract

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.

1 Introduction

The promise of deep learning is to discover rich, hierarchical models [2] that represent probability distributions over the kinds of data encountered in artificial intelligence applications, such as natural images, audio waveforms containing speech, and symbols in natural language corpora. So far, the most striking successes in deep learning have involved discriminative models, usually those that map a high-dimensional, rich sensory input to a class label [14, 20]. These striking successes have primarily been based on the backpropagation and dropout algorithms, using piecewise linear units [17, 8, 9], which have a particularly well-behaved gradient.
Deep generative models have had less of an impact, due to the difficulty of approximating many intractable probabilistic computations that arise in maximum likelihood estimation and related strategies, and due to the difficulty of leveraging the benefits of piecewise linear units in the generative context. We propose a new generative model estimation procedure that sidesteps these difficulties.1 In the proposed adversarial nets framework, the generative model is pitted against an adversary: a discriminative model that learns to determine whether a sample is from the model distribution or the data distribution. The generative model can be thought of as analogous to a team of counterfeiters, trying to produce fake currency and use it without detection, while the discriminative model is analogous to the police, trying to detect the counterfeit currency. Competition in this game drives both teams to improve their methods until the counterfeits are indistinguishable from the genuine articles. ∗Ian Goodfellow is now a research scientist at Google, but did this work earlier as a UdeM student. †Jean Pouget-Abadie did this work while visiting Université de Montréal from École Polytechnique. ‡Sherjil Ozair is visiting Université de Montréal from Indian Institute of Technology Delhi. §Yoshua Bengio is a CIFAR Senior Fellow. 1All code and hyperparameters available at http://www.github.com/goodfeli/adversarial This framework can yield specific training algorithms for many kinds of model and optimization algorithm. In this article, we explore the special case when the generative model generates samples by passing random noise through a multilayer perceptron, and the discriminative model is also a multilayer perceptron. We refer to this special case as adversarial nets. In this case, we can train both models using only the highly successful backpropagation and dropout algorithms [16] and sample from the generative model using only forward propagation.
No approximate inference or Markov chains are necessary. 2 Related work Until recently, most work on deep generative models focused on models that provided a parametric specification of a probability distribution function. The model can then be trained by maximizing the log likelihood. In this family of models, perhaps the most successful is the deep Boltzmann machine [25]. Such models generally have intractable likelihood functions and therefore require numerous approximations to the likelihood gradient. These difficulties motivated the development of “generative machines”: models that do not explicitly represent the likelihood, yet are able to generate samples from the desired distribution. Generative stochastic networks [4] are an example of a generative machine that can be trained with exact backpropagation rather than the numerous approximations required for Boltzmann machines. This work extends the idea of a generative machine by eliminating the Markov chains used in generative stochastic networks. Our work backpropagates derivatives through generative processes by using the observation that lim_{σ→0} ∇_x E_{ϵ∼N(0, σ²I)} f(x + ϵ) = ∇_x f(x). We were unaware at the time we developed this work that Kingma and Welling [18] and Rezende et al. [23] had developed more general stochastic backpropagation rules, allowing one to backpropagate through Gaussian distributions with finite variance, and to backpropagate to the covariance parameter as well as the mean. These backpropagation rules could allow one to learn the conditional variance of the generator, which we treated as a hyperparameter in this work. Kingma and Welling [18] and Rezende et al. [23] use stochastic backpropagation to train variational autoencoders (VAEs). Like generative adversarial networks, variational autoencoders pair a differentiable generator network with a second neural network. Unlike generative adversarial networks, the second network in a VAE is a recognition model that performs approximate inference.
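The limit above that motivates backpropagating derivatives through a generative process can be checked numerically. The sketch below is our own illustration (not from the paper), using f = sin as an arbitrary smooth test function: by the reparameterization identity, ∇_x E[f(x + ϵ)] = E[f′(x + ϵ)], and the Monte Carlo estimate approaches ∇_x f(x) as σ → 0.

```python
import numpy as np

rng = np.random.default_rng(0)
x = 1.3                       # point at which we differentiate
true_grad = np.cos(x)         # ∇x f(x) for f = sin

errors = []
for sigma in (0.5, 0.1, 0.01):
    eps = rng.normal(0.0, sigma, 400_000)
    # Reparameterization: ∇x E[f(x + ε)] = E[f'(x + ε)] for smooth f
    grad_est = np.mean(np.cos(x + eps))
    errors.append(abs(grad_est - true_grad))

# The bias shrinks as σ → 0 (analytically, E[cos(x + ε)] = e^{-σ²/2} cos x)
print(errors)
```

The residual error at σ = 0.5 comes from smoothing, not Monte Carlo noise; it vanishes in the σ → 0 limit the text invokes.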
GANs require differentiation through the visible units, and thus cannot model discrete data, while VAEs require differentiation through the hidden units, and thus cannot have discrete latent variables. Other VAE-like approaches exist [12, 22] but are less closely related to our method. Previous work has also taken the approach of using a discriminative criterion to train a generative model [29, 13]. These approaches use criteria that are intractable for deep generative models. These methods are difficult even to approximate for deep models because they involve ratios of probabilities which cannot be approximated using variational approximations that lower bound the probability. Noise-contrastive estimation (NCE) [13] involves training a generative model by learning the weights that make the model useful for discriminating data from a fixed noise distribution. Using a previously trained model as the noise distribution allows training a sequence of models of increasing quality. This can be seen as an informal competition mechanism similar in spirit to the formal competition used in the adversarial networks game. The key limitation of NCE is that its “discriminator” is defined by the ratio of the probability densities of the noise distribution and the model distribution, and thus requires the ability to evaluate and backpropagate through both densities. Some previous work has used the general concept of having two neural networks compete. The most relevant work is predictability minimization [26]. In predictability minimization, each hidden unit in a neural network is trained to be different from the output of a second network, which predicts the value of that hidden unit given the value of all of the other hidden units. This work differs from predictability minimization in three important ways: 1) in this work, the competition between the networks is the sole training criterion, and is sufficient on its own to train the network.
Predictability minimization is only a regularizer that encourages the hidden units of a neural network to be statistically independent while they accomplish some other task; it is not a primary training criterion. 2) The nature of the competition is different. In predictability minimization, two networks’ outputs are compared, with one network trying to make the outputs similar and the other trying to make the outputs different. The output in question is a single scalar. In GANs, one network produces a rich, high dimensional vector that is used as the input to another network, and attempts to choose an input that the other network does not know how to process. 3) The specification of the learning process is different. Predictability minimization is described as an optimization problem with an objective function to be minimized, and learning approaches the minimum of the objective function. GANs are based on a minimax game rather than an optimization problem, and have a value function that one agent seeks to maximize and the other seeks to minimize. The game terminates at a saddle point that is a minimum with respect to one player’s strategy and a maximum with respect to the other player’s strategy. Generative adversarial networks have sometimes been confused with the related concept of “adversarial examples” [28]. Adversarial examples are examples found by using gradient-based optimization directly on the input to a classification network, in order to find examples that are similar to the data yet misclassified. This is different from the present work because adversarial examples are not a mechanism for training a generative model. Instead, adversarial examples are primarily an analysis tool for showing that neural networks behave in intriguing ways, often classifying two images differently with high confidence even though the difference between them is imperceptible to a human observer.
The existence of such adversarial examples does suggest that generative adversarial network training could be inefficient, because they show that it is possible to make modern discriminative networks confidently recognize a class without emulating any of the human-perceptible attributes of that class. 3 Adversarial nets The adversarial modeling framework is most straightforward to apply when the models are both multilayer perceptrons. To learn the generator’s distribution pg over data x, we define a prior on input noise variables pz(z), then represent a mapping to data space as G(z; θg), where G is a differentiable function represented by a multilayer perceptron with parameters θg. We also define a second multilayer perceptron D(x; θd) that outputs a single scalar. D(x) represents the probability that x came from the data rather than pg. We train D to maximize the probability of assigning the correct label to both training examples and samples from G. We simultaneously train G to minimize log(1 − D(G(z))). In other words, D and G play the following two-player minimax game with value function V(G, D): min_G max_D V(D, G) = E_{x∼pdata(x)}[log D(x)] + E_{z∼pz(z)}[log(1 − D(G(z)))]. (1) In the next section, we present a theoretical analysis of adversarial nets, essentially showing that the training criterion allows one to recover the data generating distribution as G and D are given enough capacity, i.e., in the non-parametric limit. See Figure 1 for a less formal, more pedagogical explanation of the approach. In practice, we must implement the game using an iterative, numerical approach. Optimizing D to completion in the inner loop of training is computationally prohibitive, and on finite datasets would result in overfitting. Instead, we alternate between k steps of optimizing D and one step of optimizing G. This results in D being maintained near its optimal solution, so long as G changes slowly enough. The procedure is formally presented in Algorithm 1.
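The alternating scheme of Algorithm 1 can be sketched end-to-end on a toy problem. The example below is our own illustration, not from the paper: a one-dimensional generator G(z) = z + θ with z ∼ N(0, 1) is fit to data drawn from N(2, 1), against a logistic-regression discriminator D(x) = sigmoid(wx + b), with k = 1 discriminator steps per generator step. The generator ascends E[log D(G(z))] rather than descending log(1 − D(G(z))), the practical substitution the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

theta = -1.0          # generator parameter: G(z) = z + theta
w, b = 0.0, 0.0       # discriminator parameters: D(x) = sigmoid(w*x + b)
lr, m, k = 0.05, 128, 1

for _ in range(3000):
    for _ in range(k):  # k discriminator steps per generator step
        x = rng.normal(2.0, 1.0, m)            # minibatch from p_data = N(2, 1)
        gz = rng.normal(0.0, 1.0, m) + theta   # minibatch from p_g
        dx, dg = sigmoid(w * x + b), sigmoid(w * gz + b)
        # ascend E[log D(x)] + E[log(1 - D(G(z)))]
        w += lr * np.mean((1 - dx) * x - dg * gz)
        b += lr * np.mean((1 - dx) - dg)
    gz = rng.normal(0.0, 1.0, m) + theta
    dg = sigmoid(w * gz + b)
    # generator step: ascend E[log D(G(z))] (the stronger-gradient objective)
    theta += lr * np.mean((1 - dg) * w)

print(theta)  # should drift toward the data mean, 2.0
```

As θ approaches the data mean, the discriminator's optimal weights shrink toward zero and its output toward 1/2, the equilibrium the theory section characterizes.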
In practice, equation 1 may not provide sufficient gradient for G to learn well. Early in learning, when G is poor, D can reject samples with high confidence because they are clearly different from the training data. In this case, log(1 − D(G(z))) saturates. Rather than training G to minimize log(1 − D(G(z))) we can train G to maximize log D(G(z)). This objective function results in the same fixed point of the dynamics of G and D but provides much stronger gradients early in learning. 4 Theoretical Results The generator G implicitly defines a probability distribution pg as the distribution of the samples G(z) obtained when z ∼ pz. Therefore, we would like Algorithm 1 to converge to a good estimator of pdata, if given enough capacity and training time. The results of this section are done in a nonparametric setting, e.g. we represent a model with infinite capacity by studying convergence in the space of probability density functions. We will show in section 4.1 that this minimax game has a global optimum for pg = pdata. We will then show in section 4.2 that Algorithm 1 optimizes Eq 1, thus obtaining the desired result. Figure 1: Generative adversarial nets are trained by simultaneously updating the discriminative distribution (D, blue, dashed line) so that it distinguishes samples of the data generating distribution (black, dotted line) px from those of the generative distribution pg (G) (green, solid line). The lower horizontal line is the domain from which z is sampled, in this case uniformly. The horizontal line above is part of the domain of x. The upward arrows show how the mapping x = G(z) imposes the non-uniform distribution pg on transformed samples. G contracts in regions of high density and expands in regions of low density of pg. (a) Consider an adversarial pair near convergence: pg is similar to pdata and D is a partially accurate classifier.
(b) In the inner loop of the algorithm D is trained to discriminate samples from data, converging to D∗(x) = pdata(x)/(pdata(x) + pg(x)). (c) After an update to G, the gradient of D has guided G(z) to flow to regions that are more likely to be classified as data. (d) After several steps of training, if G and D have enough capacity, they will reach a point at which both cannot improve because pg = pdata. The discriminator is unable to differentiate between the two distributions, i.e. D(x) = 1/2.
Algorithm 1 Minibatch stochastic gradient descent training of generative adversarial nets. The number of steps to apply to the discriminator, k, is a hyperparameter. We used k = 1, the least expensive option, in our experiments.
for number of training iterations do
  for k steps do
    • Sample minibatch of m noise samples {z(1), . . . , z(m)} from noise prior pz(z).
    • Sample minibatch of m examples {x(1), . . . , x(m)} from data generating distribution pdata(x).
    • Update the discriminator by ascending its stochastic gradient: ∇θd (1/m) Σ_{i=1}^{m} [log D(x(i)) + log(1 − D(G(z(i))))].
  end for
  • Sample minibatch of m noise samples {z(1), . . . , z(m)} from noise prior pz(z).
  • Update the generator by descending its stochastic gradient: ∇θg (1/m) Σ_{i=1}^{m} log(1 − D(G(z(i)))).
end for
The gradient-based updates can use any standard gradient-based learning rule. We used momentum in our experiments. 4.1 Global Optimality of pg = pdata We first consider the optimal discriminator D for any given generator G. Proposition 1. For G fixed, the optimal discriminator D is D∗_G(x) = pdata(x)/(pdata(x) + pg(x)) (2) Proof. The training criterion for the discriminator D, given any generator G, is to maximize the quantity V(G, D): V(G, D) = ∫_x pdata(x) log(D(x)) dx + ∫_z pz(z) log(1 − D(g(z))) dz = ∫_x [pdata(x) log(D(x)) + pg(x) log(1 − D(x))] dx (3) For any (a, b) ∈ R² \ {(0, 0)}, the function y ↦ a log(y) + b log(1 − y) achieves its maximum in [0, 1] at a/(a + b).
The discriminator does not need to be defined outside of Supp(pdata) ∪ Supp(pg), concluding the proof. Note that the training objective for D can be interpreted as maximizing the log-likelihood for estimating the conditional probability P(Y = y|x), where Y indicates whether x comes from pdata (with y = 1) or from pg (with y = 0). The minimax game in Eq. 1 can now be reformulated as: C(G) = max_D V(G, D) = E_{x∼pdata}[log D∗_G(x)] + E_{z∼pz}[log(1 − D∗_G(G(z)))] = E_{x∼pdata}[log D∗_G(x)] + E_{x∼pg}[log(1 − D∗_G(x))] = E_{x∼pdata}[log(pdata(x)/(pdata(x) + pg(x)))] + E_{x∼pg}[log(pg(x)/(pdata(x) + pg(x)))] (4) Theorem 1. The global minimum of the virtual training criterion C(G) is achieved if and only if pg = pdata. At that point, C(G) achieves the value −log 4. Proof. For pg = pdata, D∗_G(x) = 1/2 (consider Eq. 2). Hence, by inspecting Eq. 4 at D∗_G(x) = 1/2, we find C(G) = log 1/2 + log 1/2 = −log 4. To see that this is the best possible value of C(G), reached only for pg = pdata, observe that E_{x∼pdata}[−log 2] + E_{x∼pg}[−log 2] = −log 4 and that by subtracting this expression from C(G) = V(D∗_G, G), we obtain: C(G) = −log(4) + KL(pdata ∥ (pdata + pg)/2) + KL(pg ∥ (pdata + pg)/2) (5) where KL is the Kullback–Leibler divergence. We recognize in the previous expression the Jensen–Shannon divergence between the model’s distribution and the data generating process: C(G) = −log(4) + 2 · JSD(pdata ∥ pg) (6) Since the Jensen–Shannon divergence between two distributions is always non-negative, and zero iff they are equal, we have shown that C∗ = −log(4) is the global minimum of C(G) and that the only solution is pg = pdata, i.e., the generative model perfectly replicating the data distribution. 4.2 Convergence of Algorithm 1 Proposition 2. If G and D have enough capacity, and at each step of Algorithm 1, the discriminator is allowed to reach its optimum given G, and pg is updated so as to improve the criterion E_{x∼pdata}[log D∗_G(x)] + E_{x∼pg}[log(1 − D∗_G(x))], then pg converges to pdata. Proof. Consider V(G, D) = U(pg, D) as a function of pg as done in the above criterion. Note that U(pg, D) is convex in pg. The subderivatives of a supremum of convex functions include the derivative of the function at the point where the maximum is attained. In other words, if f(x) = sup_{α∈A} f_α(x) and f_α(x) is convex in x for every α, then ∂f_β(x) ∈ ∂f if β = arg sup_{α∈A} f_α(x). This is equivalent to computing a gradient descent update for pg at the optimal D given the corresponding G. sup_D U(pg, D) is convex in pg with a unique global optimum as proven in Thm 1; therefore with sufficiently small updates of pg, pg converges to pdata, concluding the proof. In practice, adversarial nets represent a limited family of pg distributions via the function G(z; θg), and we optimize θg rather than pg itself, so the proofs do not apply. However, the excellent performance of multilayer perceptrons in practice suggests that they are a reasonable model to use despite their lack of theoretical guarantees.
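Both Proposition 1 and Theorem 1 are easy to verify numerically for discrete distributions. The check below is our own (not from the paper): it confirms that y ↦ a·log(y) + b·log(1 − y) peaks at a/(a + b), that C(p, p) = −log 4, and that the identity C(G) = −log 4 + 2·JSD(pdata ∥ pg) holds.

```python
import numpy as np

# Proposition 1: a*log(y) + b*log(1-y) is maximized on (0, 1) at a/(a+b)
a, b = 0.7, 0.2                      # stand-ins for p_data(x) and p_g(x) at a point
ys = np.linspace(1e-4, 1 - 1e-4, 200_001)
y_star = ys[np.argmax(a * np.log(ys) + b * np.log(1 - ys))]

def C(p, q):
    """Virtual training criterion C(G) = E_p[log p/(p+q)] + E_q[log q/(p+q)]."""
    return np.sum(p * np.log(p / (p + q))) + np.sum(q * np.log(q / (p + q)))

def jsd(p, q):
    """Jensen-Shannon divergence between discrete distributions p and q."""
    mid = 0.5 * (p + q)
    kl = lambda u, v: np.sum(u * np.log(u / v))
    return 0.5 * kl(p, mid) + 0.5 * kl(q, mid)

p = np.array([0.1, 0.4, 0.5])   # stand-in for p_data
q = np.array([0.5, 0.3, 0.2])   # stand-in for p_g
```

The last assertion below is the content of Theorem 1: any mismatch between p and q pushes C(G) strictly above its minimum −log 4.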
Model | MNIST | TFD
DBN [3] | 138 ± 2 | 1909 ± 66
Stacked CAE [3] | 121 ± 1.6 | 2110 ± 50
Deep GSN [5] | 214 ± 1.1 | 1890 ± 29
Adversarial nets | 225 ± 2 | 2057 ± 26
Table 1: Parzen window-based log-likelihood estimates. The reported numbers on MNIST are the mean log-likelihood of samples on the test set, with the standard error of the mean computed across examples. On TFD, we computed the standard error across folds of the dataset, with a different σ chosen using the validation set of each fold, and the mean log-likelihood was computed on each fold. For MNIST we compare against other models of the real-valued (rather than binary) version of the dataset. 5 Experiments We trained adversarial nets on a range of datasets including MNIST [21], the Toronto Face Database (TFD) [27], and CIFAR-10 [19]. The generator nets used a mixture of rectifier linear activations [17, 8] and sigmoid activations, while the discriminator net used maxout [9] activations. Dropout [16] was applied in training the discriminator net. While our theoretical framework permits the use of dropout and other noise at intermediate layers of the generator, we used noise as the input to only the bottommost layer of the generator network. We estimate the probability of the test set data under pg by fitting a Gaussian Parzen window to the samples generated with G and reporting the log-likelihood under this distribution. The σ parameter of the Gaussians was obtained by cross validation on the validation set. This procedure was introduced in Breuleux et al. [7] and used for various generative models for which the exact likelihood is not tractable [24, 3, 4]. Results are reported in Table 1. This method of estimating the likelihood has somewhat high variance and does not perform well in high dimensional spaces, but it is the best method available to our knowledge.
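The evaluation protocol can be sketched directly: fit a Gaussian Parzen window (kernel density estimate) to generated samples and score held-out points under it. This minimal version is our own sketch (function and variable names are ours); in the paper σ is chosen by cross-validation on a validation set, while here it is fixed for illustration.

```python
import numpy as np

def parzen_logpdf(gen_samples, test_points, sigma):
    """Log-density of each test point under an isotropic Gaussian Parzen
    window centered on the generated samples; both arrays are (n, d)."""
    m, d = gen_samples.shape
    diff = test_points[:, None, :] - gen_samples[None, :, :]   # (n_test, m, d)
    exps = -np.sum(diff ** 2, axis=2) / (2.0 * sigma ** 2)     # (n_test, m)
    # numerically stable log-mean-exp over the m kernel centers
    mx = exps.max(axis=1, keepdims=True)
    log_mean = mx[:, 0] + np.log(np.mean(np.exp(exps - mx), axis=1))
    return log_mean - 0.5 * d * np.log(2.0 * np.pi * sigma ** 2)

rng = np.random.default_rng(0)
gen = rng.normal(0.0, 1.0, (5000, 1))            # pretend these came from G
ll = parzen_logpdf(gen, np.array([[0.0], [5.0]]), sigma=0.2)
```

For samples from N(0, 1), the estimate near the mode should be close to the density of N(0, 1 + σ²) there, and far outliers should score much lower, which is what the table's log-likelihood comparisons rely on.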
Advances in generative models that can sample but not estimate likelihood directly motivate further research into how to evaluate such models. In Figures 2 and 3 we show samples drawn from the generator net after training. While we make no claim that these samples are better than samples generated by existing methods, we believe that these samples are at least competitive with the better generative models in the literature and highlight the potential of the adversarial framework. 6 Advantages and disadvantages This new framework comes with advantages and disadvantages relative to previous modeling frameworks. The disadvantages are primarily that there is no explicit representation of pg(x), and that D must be synchronized well with G during training (in particular, G must not be trained too much without updating D, in order to avoid “the Helvetica scenario” in which G collapses too many values of z to the same value of x to have enough diversity to model pdata), much as the negative chains of a Boltzmann machine must be kept up to date between learning steps. The advantages are that Markov chains are never needed, only backprop is used to obtain gradients, no inference is needed during learning, and a wide variety of functions can be incorporated into the model. Table 2 summarizes the comparison of generative adversarial nets with other generative modeling approaches. The aforementioned advantages are primarily computational. Adversarial models may also gain some statistical advantage from the generator network not being updated directly with data examples, but only with gradients flowing through the discriminator. This means that components of the input are not copied directly into the generator’s parameters. Another advantage of adversarial networks is that they can represent very sharp, even degenerate distributions, while methods based on Markov chains require that the distribution be somewhat blurry in order for the chains to be able to mix between modes. 
7 Conclusions and future work This framework admits many straightforward extensions: Figure 2: Visualization of samples from the model. Rightmost column shows the nearest training example of the neighboring sample, in order to demonstrate that the model has not memorized the training set. Samples are fair random draws, not cherry-picked. Unlike most other visualizations of deep generative models, these images show actual samples from the model distributions, not conditional means given samples of hidden units. Moreover, these samples are uncorrelated because the sampling process does not depend on Markov chain mixing. a) MNIST b) TFD c) CIFAR-10 (fully connected model) d) CIFAR-10 (convolutional discriminator and “deconvolutional” generator) Figure 3: Digits obtained by linearly interpolating between coordinates in z space of the full model.
1. A conditional generative model p(x | c) can be obtained by adding c as input to both G and D.
2. Learned approximate inference can be performed by training an auxiliary network to predict z given x. This is similar to the inference net trained by the wake-sleep algorithm [15] but with the advantage that the inference net may be trained for a fixed generator net after the generator net has finished training.
3. One can approximately model all conditionals p(x_S | x_∖S), where S is a subset of the indices of x, by training a family of conditional models that share parameters. Essentially, one can use adversarial nets to implement a stochastic extension of the deterministic MP-DBM [10].
4. Semi-supervised learning: features from the discriminator or inference net could improve performance of classifiers when limited labeled data is available.
5. Efficiency improvements: training could be accelerated greatly by devising better methods for coordinating G and D or determining better distributions to sample z from during training.
This paper has demonstrated the viability of the adversarial modeling framework, suggesting that these research directions could prove useful.
Deep directed graphical models | Deep undirected graphical models | Generative autoencoders | Adversarial models
Training: Inference needed during training. | Inference needed during training. MCMC needed to approximate partition function gradient. | Enforced tradeoff between mixing and power of reconstruction generation. | Synchronizing the discriminator with the generator. Helvetica.
Inference: Learned approximate inference | Variational inference | MCMC-based inference | Learned approximate inference
Sampling: No difficulties | Requires Markov chain | Requires Markov chain | No difficulties
Evaluating p(x): Intractable, may be approximated with AIS | Intractable, may be approximated with AIS | Not explicitly represented, may be approximated with Parzen density estimation | Not explicitly represented, may be approximated with Parzen density estimation
Model design: Models need to be designed to work with the desired inference scheme — some inference schemes support similar model families as GANs | Careful design needed to ensure multiple properties | Any differentiable function is theoretically permitted | Any differentiable function is theoretically permitted
Table 2: Challenges in generative modeling: a summary of the difficulties encountered by different approaches to deep generative modeling for each of the major operations involving a model.
Acknowledgments We would like to acknowledge Patrice Marcotte, Olivier Delalleau, Kyunghyun Cho, Guillaume Alain and Jason Yosinski for helpful discussions. Yann Dauphin shared his Parzen window evaluation code with us. We would like to thank the developers of Pylearn2 [11] and Theano [6, 1], particularly Frédéric Bastien who rushed a Theano feature specifically to benefit this project. Arnaud Bergeron provided much-needed support with LaTeX typesetting.
We would also like to thank CIFAR, and Canada Research Chairs for funding, and Compute Canada, and Calcul Québec for providing computational resources. Ian Goodfellow is supported by the 2013 Google Fellowship in Deep Learning. Finally, we would like to thank Les Trois Brasseurs for stimulating our creativity. References [1] Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I. J., Bergeron, A., Bouchard, N., and Bengio, Y. (2012). Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop. [2] Bengio, Y. (2009). Learning deep architectures for AI. Now Publishers. [3] Bengio, Y., Mesnil, G., Dauphin, Y., and Rifai, S. (2013). Better mixing via deep representations. In ICML’13. [4] Bengio, Y., Thibodeau-Laufer, E., and Yosinski, J. (2014a). Deep generative stochastic networks trainable by backprop. In ICML’14. [5] Bengio, Y., Thibodeau-Laufer, E., Alain, G., and Yosinski, J. (2014b). Deep generative stochastic networks trainable by backprop. In Proceedings of the 30th International Conference on Machine Learning (ICML’14). [6] Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010). Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy). Oral Presentation. [7] Breuleux, O., Bengio, Y., and Vincent, P. (2011). Quickly generating representative samples from an RBM-derived process. Neural Computation, 23(8), 2053–2073. [8] Glorot, X., Bordes, A., and Bengio, Y. (2011). Deep sparse rectifier neural networks. In AISTATS’2011. [9] Goodfellow, I. J., Warde-Farley, D., Mirza, M., Courville, A., and Bengio, Y. (2013a). Maxout networks. In ICML’2013. [10] Goodfellow, I. J., Mirza, M., Courville, A., and Bengio, Y. (2013b). Multi-prediction deep Boltzmann machines. In NIPS’2013. [11] Goodfellow, I.
J., Warde-Farley, D., Lamblin, P., Dumoulin, V., Mirza, M., Pascanu, R., Bergstra, J., Bastien, F., and Bengio, Y. (2013c). Pylearn2: a machine learning research library. arXiv preprint arXiv:1308.4214. [12] Gregor, K., Danihelka, I., Mnih, A., Blundell, C., and Wierstra, D. (2014). Deep autoregressive networks. In ICML’2014. [13] Gutmann, M. and Hyvarinen, A. (2010). Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of The Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS’10). [14] Hinton, G., Deng, L., Dahl, G. E., Mohamed, A., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T., and Kingsbury, B. (2012a). Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 29(6), 82–97. [15] Hinton, G. E., Dayan, P., Frey, B. J., and Neal, R. M. (1995). The wake-sleep algorithm for unsupervised neural networks. Science, 268, 1158–1161. [16] Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2012b). Improving neural networks by preventing co-adaptation of feature detectors. Technical report, arXiv:1207.0580. [17] Jarrett, K., Kavukcuoglu, K., Ranzato, M., and LeCun, Y. (2009). What is the best multi-stage architecture for object recognition? In Proc. International Conference on Computer Vision (ICCV’09), pages 2146–2153. IEEE. [18] Kingma, D. P. and Welling, M. (2014). Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations (ICLR). [19] Krizhevsky, A. and Hinton, G. (2009). Learning multiple layers of features from tiny images. Technical report, University of Toronto. [20] Krizhevsky, A., Sutskever, I., and Hinton, G. (2012). ImageNet classification with deep convolutional neural networks. In NIPS’2012. [21] LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11), 2278–2324. [22] Mnih, A. and Gregor, K. (2014). Neural variational inference and learning in belief networks. Technical report, arXiv preprint arXiv:1402.0030. [23] Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. Technical report, arXiv:1401.4082. [24] Rifai, S., Bengio, Y., Dauphin, Y., and Vincent, P. (2012). A generative process for sampling contractive auto-encoders. In ICML’12. [25] Salakhutdinov, R. and Hinton, G. E. (2009). Deep Boltzmann machines. In AISTATS’2009, pages 448–455. [26] Schmidhuber, J. (1992). Learning factorial codes by predictability minimization. Neural Computation, 4(6), 863–879. [27] Susskind, J., Anderson, A., and Hinton, G. E. (2010). The Toronto face dataset. Technical Report UTML TR 2010-001, U. Toronto. [28] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I. J., and Fergus, R. (2014). Intriguing properties of neural networks. ICLR, abs/1312.6199. [29] Tu, Z. (2007). Learning generative models via discriminative approaches. In Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, pages 1–8. IEEE.
|
2014
|
142
|
5,228
|
Delay-Tolerant Algorithms for Asynchronous Distributed Online Learning H. Brendan McMahan Google, Inc. Seattle, WA mcmahan@google.com Matthew Streeter Duolingo, Inc.∗ Pittsburgh, PA matt@duolingo.com Abstract We analyze new online gradient descent algorithms for distributed systems with large delays between gradient computations and the corresponding updates. Using insights from adaptive gradient methods, we develop algorithms that adapt not only to the sequence of gradients, but also to the precise update delays that occur. We first give an impractical algorithm that achieves a regret bound that precisely quantifies the impact of the delays. We then analyze AdaptiveRevision, an algorithm that is efficiently implementable and achieves comparable guarantees. The key algorithmic technique is appropriately and efficiently revising the learning rate used for previous gradient steps. Experimental results show when the delays grow large (1000 updates or more), our new algorithms perform significantly better than standard adaptive gradient methods. 1 Introduction Stochastic and online gradient descent methods have proved to be extremely useful for solving largescale machine learning problems [1, 2, 3, 4]. Recently, there has been much work on extending these algorithms to parallel and distributed systems [5, 6, 7, 8, 9]. In particular, Recht et al. [10] and Duchi et al. [11] have shown that standard stochastic algorithms essentially “work” even when updates are applied asynchronously by many threads. Our experiments confirm this for moderate amounts of parallelism (say 100 threads), but show that for large amounts of parallelism (as in a distributed system, with say 1000 threads spread over many machines), performance can degrade significantly. To address this, we develop new algorithms that adapt to both the data and the amount of parallelism. Problem Setting and Notation We consider a computation model where one or more computation units (a thread in a parallel implementation or a full machine in a distributed system) store and ∗Work performed while at Google, Inc.
Adaptive gradient (AdaGrad) methods [12, 13] have proved remarkably effective for real-world problems, particularly on sparse data (for example, text classification with bag-of-words features). The key idea behind these algorithms is to prove a general regret bound in terms of an arbitrary sequence of non-increasing learning rates and the full sequence of gradients, and then to define an adaptive method for choosing the learning rates as a function of the gradients seen so far, so as to minimize the final bound when the learning rates are plugged in. We extend this idea to the parallel setting, by developing a general regret bound that depends on both the gradients and the exact update delays that occur (rather than say an upper bound on delays). We then present AdaptiveRevision, an algorithm for choosing learning rates and efficiently revising past learning-rate choices that strives to minimize this bound. In addition to providing an adaptive regret bound (which recovers the standard AdaGrad bound in the case of no delays), we demonstrate excellent empirical performance. Problem Setting and Notation We consider a computation model where one or more computation units (a thread in a parallel implementation or a full machine in a distributed system) store and ∗Work performed while at Google, Inc. 1 update the model x ∈Rn, and another larger set of computation units perform feature extraction and prediction. We call the first type the Updaters (since they apply the gradient updates) and the second type the Readers (since they read coefficients stored by the Updaters). Because the Readers and Updaters may reside on different machines, perhaps located in different parts of the world, communication between them is not instantaneous. Thus, when making a prediction, a Reader will generally be using a coefficient vector that is somewhat stale relative to the most recent version being served by the Updaters. 
As one application of this model, consider the problem of predicting click-through rates for sponsored search ads using a generalized linear model [14, 15]. While the coefficient vector may be stored and updated centrally, predictions must be available in milliseconds in any part of the world. This leads naturally to an architecture in which a large number of Readers maintain local copies of the coefficient vector, sending updates to the Updaters and periodically requesting fresh coefficients from them. As another application, this model encompasses the Parameter Server / Model Replica split of Downpour SGD [16]. Our bounds apply to general online convex optimization [4], which encompasses the problem of predicting with a generalized linear model (models where the prediction is a function of $a_t \cdot x_t$, where $a_t$ is a feature vector and $x_t$ are the model coefficients). We analyze the algorithm on a sequence of rounds $\tau = 1, \ldots, T$; for the moment, we index rounds based on when each prediction is made. On each round, a convex loss function $f_\tau$ arrives at a Reader, the Reader predicts with $x_\tau \in \mathbb{R}^n$ and incurs loss $f_\tau(x_\tau)$. The Reader then computes a subgradient $g_\tau \in \partial f_\tau(x_\tau)$. For each coordinate $i$ where $g_{\tau,i}$ is nonzero, the Reader sends an update to the Updater(s) for those coefficients. We are particularly concerned with sparse data, where $n$ is very large, say $10^6$–$10^9$, but any particular training example has only a small fraction of the features $a_{t,i}$ that take non-zero values. The regret against a comparator $x^* \in \mathbb{R}^n$ is

$$\mathrm{Regret}(x^*) \equiv \sum_{\tau=1}^{T} f_\tau(x_\tau) - f_\tau(x^*). \qquad (1)$$

Our primary theoretical contributions are upper bounds on the regret of our algorithms. We assume a fully asynchronous model, where the delays in the read requests and update requests can be different for different coefficients even for the same training event. This leads to a combinatorial explosion in potential interleavings of these operations, making fine-grained adaptive analysis quite difficult.
Our primary technique for addressing this will be the linearization of loss functions, a standard tool in online convex optimization which takes on increased importance in the parallel setting. An immediate consequence of convexity is that given a general convex loss function $f_\tau$, with $g_\tau \in \partial f_\tau(x_\tau)$, for any $x^*$ we have $f_\tau(x_\tau) - f_\tau(x^*) \le g_\tau \cdot (x_\tau - x^*)$. One of the key observations of Zinkevich [1] is that by plugging this inequality into (1), we see that if we can guarantee low regret against linear functions, we can provide the same guarantees against arbitrary convex functions. Further, expanding the dot products and re-arranging the sum, we can write

$$\mathrm{Regret}(x^*) \equiv \sum_{i=1}^{n} \mathrm{Regret}_i(x^*_i) \quad \text{where} \quad \mathrm{Regret}_i(x^*_i) = \sum_{\tau=1}^{T} g_{\tau,i}\,(x_{\tau,i} - x^*_i). \qquad (2)$$

If we consider algorithms where the updates are also coordinate decomposable (that is, the update to coordinate $i$ can be applied independently of the update of coordinate $j$), then we can bound $\mathrm{Regret}(x^*)$ by proving a per-coordinate bound for linear functions and then summing across coordinates. In fact, our computation architecture already assumes a coordinate-decomposable algorithm, since this lets us avoid synchronizing the Updates; so, in addition to leading to more efficient algorithms, this approach greatly simplifies the analysis. The proofs of Duchi et al. [11] take a similar approach. Bounding per-coordinate regret Given the above, we will design and analyze asynchronous one-dimensional algorithms which can be run independently on each coordinate of the true learning problem. For each coordinate, each Read and Update is assumed to be an atomic operation. It will be critical to adopt an indexing scheme different than the prediction-based indexing $\tau$ used above. The net result will be bounding the sum of (2), but we will actually re-order the sum to make the analysis easier.
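The per-coordinate decomposition (2) is easy to sanity-check numerically. Below, a tiny made-up two-coordinate example with linear losses f_t(x) = g_t · x confirms that summing per-coordinate regrets recovers the total regret:

```python
# Hypothetical 2-D linear losses f_t(x) = g_t . x: the total regret equals
# the sum of the per-coordinate regrets in (2). All values are made up.
gs = [(0.5, -1.0), (1.0, 0.0), (-0.5, 2.0)]   # gradients g_t
xs = [(0.0, 0.0), (0.1, 0.2), (-0.3, 0.4)]    # points played x_t
x_star = (0.2, -0.1)                           # comparator

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
total = sum(dot(g, x) - dot(g, x_star) for g, x in zip(gs, xs))
per_coord = sum(
    sum(g[i] * (x[i] - x_star[i]) for g, x in zip(gs, xs))
    for i in range(2)
)
assert abs(total - per_coord) < 1e-12
```

This is exactly the property that lets the paper bound each coordinate independently and then sum.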
Critically, this ordering could be different for different coordinates, and so considering one coordinate at a time simplifies the analysis considerably.¹ We index time by the order of the Updates, so the index $t$ is such that $g_t$ is the gradient associated with the $t$th update applied, and $x_t$ is the value of the coefficient immediately before the update for $g_t$ is applied. Then, the Online Gradient Descent (OGD) update consists of exactly the assumed-atomic operation

$$x_{t+1} = x_t - \eta_t g_t, \qquad (3)$$

where $\eta_t$ is a learning rate. Let $r(t) \in \{1, \ldots, t\}$ be the index such that $x_{r(t)}$ was the value of the coefficient used by the Reader to compute $g_t$ (and to predict on the corresponding example). That is, update $r(t) - 1$ completed before the Read for $g_t$, but update $r(t)$ completed after. Thus, our loss (for coordinate $i$) is $g_t x_{r(t)}$, and we desire a bound on

$$\mathrm{Regret}_i(x^*) = \sum_{t=1}^{T} g_t\,(x_{r(t)} - x^*).$$

Main result and related work We say an update $s$ is outstanding at time $t$ if the Read for update $s$ occurs before update $t$, but the Update occurs after; precisely, $s$ is outstanding at $t$ if $r(s) \le t < s$. We let $F_t \equiv \{s \mid r(s) \le t < s\}$ be the set of updates outstanding at time $t$. We call the sum of these gradients the forward gradient sum, $g^{\mathrm{fwd}}_t \equiv \sum_{s \in F_t} g_s$. Then, ignoring constant factors and terms independent of $T$, we show that AdaptiveRevision has a per-coordinate bound of the form

$$\mathrm{Regret} \le \sqrt{\sum_{t=1}^{T} g_t^2 + g_t\, g^{\mathrm{fwd}}_t}. \qquad (4)$$

Theorem 3 gives the precise result as well as the $n$-dimensional version. Observe that without any delays, $g^{\mathrm{fwd}}_t = 0$, and we arrive at the standard AdaGrad-style bound. To prove the bound for AdaptiveRevision, we require an additional InOrder assumption on the delays, namely that for any indexes $s_1$ and $s_2$, if $r(s_1) < r(s_2)$ then $s_1 < s_2$. This assumption should be approximately satisfied most of the time for realistic delay distributions, and even under more pathological delay distributions (delays uniform on {0, . . .
, m} rather than more tightly grouped around a mean delay), our experiments show excellent performance for AdaptiveRevision. The key challenge is that, unlike in the AdaGrad case, conceptually we need to know gradients that have not yet been computed in order to calculate the optimal learning rate. We surmount this by using an algorithm that not only chooses learning rates adaptively, but also revises previous gradient steps. Critically, these revisions require only moderate additional storage and network cost: we store a sum of gradients along with each coefficient, and for each Read, we remember the value of this gradient sum at the time of the Read until the corresponding Update occurs. This latter storage can essentially be implemented on the network, if the gradient sum is sent from the Updater to the Reader and back again, ensuring it is available exactly when needed. This is the approach taken in the pseudocode of Algorithm 1. Against a true adversary and a maximum delay of $m$, in general we cannot do better than just training synchronously on a single machine using a $1/m$ fraction of the data. Our results surmount this issue by producing strongly data-dependent bounds: we do not expect fully adversarial gradients and delays in practice, and so on real data the bound we prove still gives interesting results. In fact, we can essentially recover the guarantees for AsyncAdaGrad from Duchi et al. [11], which rely on stochastic assumptions on the sparsity of the data, by applying the same assumptions to our bound. To simplify the comparison, WLOG we consider a 1-dimensional problem where $\|x^*\|_2 = 1$, $\|g_t\|_2 \le 1$, and we have the stochastic assumption that each $g_t$ is exactly 0, independently, with probability $1 - p$ (implying $M_j = 1$, $M = 1$, and $M_2 = p$ in their notation).
Then, simple calculations (given in Appendix B) show our bound for AdaptiveRevision implies a bound on expected regret of $O\big(\sqrt{(1 + mp)\,pT}\big)$ without knowledge of $p$ or $m$, ignoring terms independent of $T$.² AsyncAdaGrad achieves the same bound, but critically this requires knowledge of both $p$ and $m$ in advance in order to tune the learning rate appropriately (in the general $n$-dimensional case, this would mean knowing not just one parameter $p$, but a separate sparsity parameter $p_j$ for each coordinate, and then using an appropriate per-coordinate scaling of the learning rate depending on this); without such knowledge, AsyncAdaGrad only obtains the much worse bound $O\big((1 + mp)\sqrt{pT}\big)$. AdaptiveRevision will also provide significantly better guarantees if most of the delays are much less than the maximum, or if the data is only approximately sparse (e.g., many $g_t = 10^{-6}$ rather than exactly 0). The above analysis also makes a worst-case assumption on the $g_t g^{\mathrm{fwd}}_t$ terms, but in practice many gradients in $g^{\mathrm{fwd}}_t$ are likely to have opposite signs and cancel out, a fact our algorithm and bounds can exploit.

2 Algorithms and Analysis We first introduce some additional definitions. Let $o(t) \equiv \max(F_t \cup \{t\})$, the index of the highest update outstanding at time $t$, or $t$ itself if nothing is outstanding. The sets $F_t$ fully specify the delay pattern. In light of (4), we further define $G^{\mathrm{fwd}}_t \equiv g_t^2 + 2 g_t g^{\mathrm{fwd}}_t$. We also define $B_t$, the set of updates applied while update $t$ was outstanding. Under our notation, this set is easily defined as $B_t = \{r(t), \ldots, t - 1\}$ (or the empty set if $r(t) = t$; in particular $B_1 = \emptyset$).

¹Our analysis could be extended to non-coordinate-decomposable algorithms, but then the full gradient update across all coordinates would need to be atomic. This case is less interesting due to the computational overhead.
²In the analysis, we choose the parameter $G_0$ based on an upper bound $m$ on the delay, but this only impacts an additive term independent of $T$.
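The delay bookkeeping above is easy to simulate. The sketch below (plain Python, with a made-up delay pattern and 0-based indices rather than the 1-based indices of the text) computes the sets F_t and B_t from r(t), and checks that a delay bound of m implies |F_t| <= m and |B_t| <= m:

```python
def delay_sets(r):
    """Given r(t) (0-based index of the coefficient version each gradient
    was read from), return the outstanding sets F_t = {s : r(s) <= t < s}
    and the applied-while-outstanding sets B_t = {r(t), ..., t-1}."""
    T = len(r)
    F = [{s for s in range(T) if r[s] <= t < s} for t in range(T)]
    B = [set(range(r[t], t)) for t in range(T)]
    return F, B

r = [0, 0, 1, 2, 2]                 # every delay t - r(t) is at most m = 2
F, B = delay_sets(r)
m = max(t - r[t] for t in range(len(r)))
assert m == 2
assert all(len(Ft) <= m for Ft in F) and all(len(Bt) <= m for Bt in B)
assert B[0] == set()                # B_1 is empty, matching the text
```

With no delays (r(t) = t) every F_t and B_t is empty, which is the case where the bounds reduce to standard AdaGrad.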
We will also frequently use the backward gradient sum, $g^{\mathrm{bck}}_t \equiv \sum_{s=r(t)}^{t-1} g_s$. These sums most often appear in the products $G^{\mathrm{bck}}_t \equiv g_t^2 + 2 g_t g^{\mathrm{bck}}_t$. Figure 3 in Appendix A shows a variety of delay patterns and gives a visual representation of the sums $G^{\mathrm{fwd}}$ and $G^{\mathrm{bck}}$. We say the delay is (upper) bounded by $m$ if $t - r(t) \le m$ for all $t$, which implies $|F_t| \le m$ and $|B_t| \le m$. Note that if $m = 0$ then $r(t) = t$. We use the compressed summation notation $c_{1:t} \equiv \sum_{s=1}^{t} c_s$ for vectors, scalars, and functions. Our analysis builds on the following simple but fundamental result (Appendix C contains all proofs and lemmas omitted here).

Lemma 1. Given any non-increasing learning-rate schedule $\eta_t$, define $\sigma_t$ where $\sigma_1 = 1/\eta_1$ and $\sigma_t = 1/\eta_t - 1/\eta_{t-1}$ for $t > 1$, so $\eta_t = 1/\sigma_{1:t}$. Then, for any delay schedule, unprojected online gradient descent achieves, for any $x^* \in \mathbb{R}$,

$$\mathrm{Regret}(x^*) \le \frac{(2R_T)^2}{2\eta_T} + \frac{1}{2}\sum_{t=1}^{T} \eta_t G^{\mathrm{fwd}}_t \quad \text{where} \quad (2R_T)^2 \equiv \sum_{t=1}^{T} \frac{\sigma_t}{\sigma_{1:T}} |x^* - x_t|^2.$$

Proof. Given how we have indexed time, we can consider the regret of a hypothetical online gradient descent algorithm that plays $x_t$ and then observes $g_t$, since this corresponds exactly to the update (3). We can then bound regret for this hypothetical setting using a simple modification to the standard bound for OGD [1],

$$\sum_{t=1}^{T} g_t \cdot x_t - g_{1:T} \cdot x^* \le \sum_{t=1}^{T} \frac{\sigma_t}{2} |x^* - x_t|^2 + \frac{1}{2}\sum_{t=1}^{T} \eta_t g_t^2.$$

The actual algorithm used $x_{r(t)}$ to predict on $g_t$, not $x_t$, so we can bound its regret by

$$\mathrm{Regret} \le \frac{(2R_T)^2}{2\eta_T} + \frac{1}{2}\sum_{t=1}^{T} \eta_t g_t^2 + \sum_{t=1}^{T} g_t\,(x_{r(t)} - x_t). \qquad (5)$$

Recalling $x_{t+1} = x_t - \eta_t g_t$, observe that $x_{r(t)} - x_t = \sum_{s=r(t)}^{t-1} \eta_s g_s = \sum_{s \in B_t} \eta_s g_s$, and so

$$\sum_{t=1}^{T} g_t\,(x_{r(t)} - x_t) = \sum_{t=1}^{T} g_t \sum_{s \in B_t} \eta_s g_s = \sum_{s=1}^{T} \eta_s g_s \sum_{t \in F_s} g_t = \sum_{s=1}^{T} \eta_s g_s g^{\mathrm{fwd}}_s,$$

using Lemma 4(E) from the Appendix to re-order the sum. Plugging into (5) completes the proof. For projected online gradient descent, by projecting onto a feasible set of radius $R$ and assuming $x^*$ is in this set, we immediately get $|x^* - x_t| \le 2R$.
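The sum re-ordering step at the end of the proof can also be verified numerically for an arbitrary (made-up) delay pattern: with 0-based indices, both orderings sum eta_s * g_s * g_t over exactly the pairs with r(t) <= s < t:

```python
# Numerically check the sum re-ordering used in the proof of Lemma 1:
#   sum_t g_t * sum_{s in B_t} eta_s g_s == sum_s eta_s g_s * sum_{t in F_s} g_t.
# All values and the delay pattern r are made up; indices are 0-based.
g   = [1.0, -0.5, 2.0, 0.25, -1.5]
eta = [1.0, 0.9, 0.8, 0.7, 0.6]           # any non-increasing schedule
r   = [0, 0, 1, 1, 3]                     # r(t) <= t for all t

T = len(g)
B = [range(r[t], t) for t in range(T)]    # updates applied while t outstanding
F = [[t for t in range(T) if r[t] <= s < t] for s in range(T)]

lhs = sum(g[t] * sum(eta[s] * g[s] for s in B[t]) for t in range(T))
rhs = sum(eta[s] * g[s] * sum(g[t] for t in F[s]) for s in range(T))
assert abs(lhs - rhs) < 1e-12
```

Both sides enumerate the same (s, t) pairs, just grouped differently, which is all the re-ordering lemma asserts.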
Without projecting, we get a more adaptive bound which depends on the weighted quadratic mean $2R_T$. Though less standard, we choose to analyze the unprojected variant of the algorithm for two reasons. First, our analysis rests heavily on the ability to represent points played by our algorithms exactly as weighted sums of past gradients, a property not preserved when projection is invoked. More importantly, we know of no experiments on real-world prediction problems (where any $x \in \mathbb{R}^n$ is a valid model) where the projected algorithm actually performs better. In our experience, once the learning-rate schedule is tuned appropriately, the resulting $R_T$ values will not be more than a constant factor of $\|x^*\|$. This makes intuitive sense in the stochastic case, where it is known that averages of the $x_t$ should in fact converge to $x^*$.³ For learning-rate tuning we assume we know in advance a constant $\tilde R$ such that $R_T \le \tilde R$; again, in practice this is roughly equivalent to assuming we know $\|x^*\|$ in advance in order to choose the feasible set. Our first algorithm, HypFwd (for Hypothetical-Forward), assumes it has knowledge of all the gradients, so it can optimize its learning rates to minimize the above bound. If there are no delays, that is, $g^{\mathrm{fwd}}_t = 0$ for all $t$, then this immediately gives rise to a standard AdaGrad-style online gradient descent method. If there are delays, the $G^{\mathrm{fwd}}_t$ terms could be large, implying the optimal learning rates should be smaller. Unfortunately, it is impossible for a real algorithm to know $g^{\mathrm{fwd}}_t$ when $\eta_t$ is chosen. To work toward a practical algorithm, we introduce HypBack, which achieves similar guarantees (but is still impractical). Finally, we introduce AdaptiveRevision, which plays points very similar to HypBack, but can be implemented efficiently. Since we will need non-increasing learning rates, it will be useful to define $\tilde G^{\mathrm{bck}}_{1:t} \equiv \max_{s \le t} G^{\mathrm{bck}}_{1:s}$ and $\tilde G^{\mathrm{fwd}}_{1:t} \equiv \max_{s \le t} G^{\mathrm{fwd}}_{1:s}$. In practice, we expect $\tilde G^{\mathrm{bck}}_{1:T}$ to be close to $G^{\mathrm{bck}}_{1:T}$.
We assume WLOG that $G^{\mathrm{fwd}}_1 > 0$, which at worst adds a negligible additive constant to our regret.

Algorithm HypFwd This algorithm “cheats” by using the forward sum $g^{\mathrm{fwd}}_t$ to choose $\eta_t$,

$$\eta_t = \frac{\alpha}{\sqrt{\tilde G^{\mathrm{fwd}}_{1:t}}} \qquad (6)$$

for an appropriate scaling parameter $\alpha > 0$. Then, Lemma 1 combined with the technical inequality of Corollary 10 (given in Appendix D) gives

$$\mathrm{Regret} \le 2\sqrt{2}\,\tilde R \sqrt{\tilde G^{\mathrm{fwd}}_{1:T}} \qquad (7)$$

when we take $\alpha = \sqrt{2}\,\tilde R$ (recalling $\tilde R \ge R_T$). If there are no delays, this bound reduces to the standard bound $2\sqrt{2}\,\tilde R \sqrt{\sum_{t=1}^{T} g_t^2}$. With delays, however, this is a hypothetical algorithm, because it is generally not possible to know $g^{\mathrm{fwd}}_t$ when update $t$ is applied. However, we can implement this algorithm efficiently in a single-machine simulation, and it performs very well (see Section 3). Thus, our goal is to find an efficiently implementable algorithm that achieves comparable results in practice and also matches this regret bound.

Algorithm HypBack The next step in the analysis is to show that a second hypothetical algorithm, HypBack, approximates the regret bound of (7). This algorithm plays

$$\hat x_{t+1} = -\sum_{s=1}^{t} \hat\eta_s g_s \quad \text{where} \quad \hat\eta_t = \frac{\alpha}{\sqrt{\tilde G^{\mathrm{bck}}_{1:o(t)} + G_0}} \qquad (8)$$

is a learning rate with parameters $\alpha$ and $G_0$. This is a hypothetical algorithm, since we also can’t (efficiently) know $G^{\mathrm{bck}}_{1:o(t)}$ on round $t$. We prove the following guarantee:

Lemma 2. Suppose delays are bounded by $m$ and $|g_t| \le L$. Then when the InOrder property holds, HypBack with $\alpha = \sqrt{2}\,\tilde R$ and $G_0 = m^2 L^2$ has $\mathrm{Regret} \le 2\sqrt{2}\,\tilde R \sqrt{\tilde G^{\mathrm{fwd}}_{1:T}} + 2\tilde R m L$.

³For example, the arguments of Nemirovski et al. [17, Sec 2.2] hold for unprojected gradient descent.

Algorithm 1: AdaptiveRevision
Procedure Read(loss function f):
  Read (x_i, ḡ_i) from the Updaters for all necessary coordinates
  Calculate a subgradient g ∈ ∂f(x)
  for each coordinate i with a non-zero gradient do
    Send an update tuple (g ← g_i, ḡ_old ← ḡ_i) to the Updater for coordinate i
Procedure Update(g, ḡ_old):
  The Updater initializes state (ḡ ← 0, z ← 1, z′ ← 1, x ← 0) per coordinate.
  Do the following atomically:
    g^bck ← ḡ − ḡ_old                        ▷ for analysis, assign index t to the current update
    η_old ← α/√z′                            ▷ invariant: the effective η for all of g^bck
    z ← z + g² + 2g·g^bck;  z′ ← max(z, z′)  ▷ maintain z = G^bck_{1:t} and z′ = G̃^bck_{1:t}, to enforce non-increasing η
    η ← α/√z′                                ▷ new learning rate
    x ← x − ηg                               ▷ the main gradient-descent update
    x ← x + (η_old − η)g^bck                 ▷ apply adaptive revision of some previous steps
    ḡ ← ḡ + g                                ▷ maintain ḡ = g_{1:t}

Algorithm AdaptiveRevision Now that we have shown that HypBack is effective, we can describe AdaptiveRevision, which efficiently approximates HypBack. We then analyze this new algorithm by showing its loss is close to the loss of HypBack. Pseudo-code for the algorithm as implemented for the experiments is given in Algorithm 1; we now give an equivalent expression for the algorithm under the InOrder assumption. Let $\beta_t$ be the learning rate based on $\tilde G^{\mathrm{bck}}_{1:t}$, $\beta_t = \alpha/\sqrt{\tilde G^{\mathrm{bck}}_{1:t} + G_0}$. Then, AdaptiveRevision plays the points

$$x_{t+1} = -\sum_{s=1}^{t} \eta^t_s g_s \quad \text{where} \quad \eta^t_s = \beta_{\min(t,\, o(s))}. \qquad (9)$$

When $s \ll t$ we will usually have $\min(t, o(s)) = o(s)$, and so we see that $\eta^t_s = \beta_{o(s)} = \hat\eta_s$; that is, the effective learning rate applied to gradient $g_s$ is the same one HypBack would have used (namely $\hat\eta_s$). Thus, the only difference between AdaptiveRevision and HypBack is on the leading edge, where $o(s) > t$. See Figure 4 in Appendix A for an example. When InOrder holds, Lemma 6 (in Appendix C) shows Algorithm 1 plays the points specified by (9). Given Lemma 2, it is sufficient to show that the difference between the loss of HypBack and the loss of AdaptiveRevision is small. Lemma 8 (in the appendix) accomplishes this, showing that under the InOrder assumption and with $G_0 = m^2 L^2$ the difference in loss is at most $2\alpha L m$ (a quantity independent of $T$). Our main theorem is then a direct consequence of Lemma 2 and Lemma 8: Theorem 3.
Under an InOrder delay pattern with a maximum delay of at most $m$, the AdaptiveRevision algorithm guarantees

$$\mathrm{Regret} \le 2\sqrt{2}\,\tilde R \sqrt{\tilde G^{\mathrm{fwd}}_{1:T}} + (2\sqrt{2} + 2)\,\tilde R m L$$

when we take $G_0 = m^2 L^2$ and $\alpha = \sqrt{2}\,\tilde R$. Applied on a per-coordinate basis to an $n$-dimensional problem, we have

$$\mathrm{Regret} \le 2\sqrt{2}\,\tilde R \sum_{i=1}^{n} \sqrt{\sum_{t=1}^{T} g_{t,i}^2 + 2\sum_{s \in F_{t,i}} g_{t,i}\, g_{s,i}} \;+\; n(2\sqrt{2} + 2)\,\tilde R m L.$$

We note the $n$-dimensional guarantee is at most $O\big(n \tilde R L \sqrt{Tm}\big)$, which matches the lower bound for the feasible set $[-R, R]^n$ and $g_t \in [-L, L]^n$ up to the difference between $\tilde R$ and $R$ (see, for example, Langford et al. [18]).⁴ Our point, of course, is that for real data our bound will often be much, much better.

⁴To compare to regret bounds stated in terms of $L_2$ bounds on the feasible set and the gradients, note for $g_t \in [-L, L]^n$ we have $\|g_t\|_2 \le \sqrt{n}\,L$, and similarly for $x \in [-R, R]^n$ we have $\|x\|_2 \le \sqrt{n}\,R$, so the dependence on $n$ is a necessary consequence of using these norms, which are quite natural for sparse problems.

Figure 1: Accuracy as a function of update delays, with learning rate scale factors optimized for each algorithm and dataset for the zero-delay case. The x-axis is non-linear. The results are qualitatively similar across the plots, but note the differences in the y-axis ranges. In particular, the random delay pattern appears to hurt performance significantly less than either the minibatch or constant delay patterns.

Figure 2: Accuracy as a function of update delays, with learning rate scale factors optimized as a function of the delay. The lower plot in each group shows the best learning rate scale α on a log-scale.

3 Experiments We study the performance of both hypothetical algorithms and AdaptiveRevision on two real-world medium-sized datasets. We simulate the update delays using an update queue, which allows us to implement the hypothetical algorithms and also lets us precisely control both the exact delays as well as the delay pattern. We compare to the dual-averaging AsyncAdaGrad algorithm of Duchi et al.
[11] (AsyncAda-DA in the figures), as well as asynchronous AdaGrad gradient descent (AsyncAda-GD), which can be thought of as AdaptiveRevision with all $g^{\mathrm{bck}}$ set to zero and no revision step. As analyzed, AdaptiveRevision stores an extra variable ($z'$) in order to enforce a non-increasing learning rate. In practice, we found this had a negligible impact; in the plots above, AdaptiveRevision* denotes the algorithm without this check. With this improvement AdaptiveRevision stores three numbers per coefficient, versus the two stored by AsyncAdaGrad DA or GD. We consider three different delay patterns, which we parameterize by $D$, the average delay; this yields a fairer comparison across the delay patterns than using the maximum delay $m$. We consider: 1) constant delays, where all updates (except at the beginning and the end of the dataset) have a delay of exactly $D$ (e.g., rows (B) and (C) in Figure 3 in the Appendix); 2) a minibatch delay pattern,⁵ where $2D + 1$ Reads occur, followed by $2D + 1$ Updates; and 3) a random delay pattern, where the delays are chosen uniformly from the set $\{0, \ldots, 2D\}$, so again the mean delay is $D$. The first two patterns satisfy InOrder, but the third does not.

⁵It is straightforward to show that under this delay pattern, when we do not enforce non-increasing learning rates, AdaptiveRevision and HypBack are in fact equivalent to standard AdaGrad run on the minibatches (that is, with one update per minibatch using the combined minibatch gradient sum).

We evaluate on two datasets. The first is a web-search advertising dataset from a large search engine. The dataset consists of about $3.1 \times 10^6$ training examples with a large number of sparse anonymized features based on the ad and query text. Each example is labeled $\{-1, 1\}$ based on whether or not the person doing the query clicked on the ad. The second is a shuffled version of the malicious URL dataset as described by Ma et al.
[19] ($2.4 \times 10^6$ examples, $3.2 \times 10^6$ features).⁶ For each of these datasets we trained a logistic regression model, and evaluated using the logistic loss (LogLoss). That is, for an example with feature vector $a \in \mathbb{R}^n$ and label $y \in \{-1, 1\}$, the loss is given by $\ell(x, (a, y)) = \log(1 + \exp(-y\, a \cdot x))$. Following the spirit of our regret bounds, we evaluate the models online, making a single pass over the data and computing accuracy metrics on the predictions made by the model immediately before it trained on each example (i.e., progressive validation). To avoid possible transient behavior, we only report metrics for the predictions on the second half of each dataset, though this choice does not change the results significantly. The exact parametrization of the learning-rate schedule is particularly important with delayed updates. We follow the common practice of taking learning rates of the form $\eta_t = \alpha/\sqrt{S_t + 1}$, where $S_t$ is the appropriate learning-rate statistic for the given algorithm, e.g., $\tilde G^{\mathrm{bck}}_{1:o(t)}$ for HypBack or $\sum_{s=1}^{t} g_s^2$ for vanilla AdaGrad. In the analysis, we use $G_0 = m^2 L^2$ rather than $G_0 = 1$; we believe $G_0 = 1$ will generally be a better choice in practice, though we did not optimize this choice.⁷ When we optimize $\alpha$, we choose the best setting from a grid $\{\alpha_0 (1.25)^i \mid i \in \mathbb{N}\}$, where $\alpha_0$ is an initial guess for each dataset. All figures give the average delay $D$ on the x-axis. For Figure 1, for each dataset and algorithm, we optimized $\alpha$ in the zero-delay ($D = m = 0$) case, and fixed this parameter as the average delay $D$ increases. This leads to very bad performance for standard AdaGrad DA and GD as $D$ gets large. In Figure 2, we optimized $\alpha$ individually for each delay level; we plot the accuracy as before, with the lower plot showing the optimal learning-rate scaling $\alpha$ on a log-scale. The optimal learning-rate scaling for GD and DA decreases by two orders of magnitude as the delays increase.
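The per-coordinate Updater logic of Algorithm 1 translates almost line-for-line into code. The following single-coordinate Python sketch (the class name and α value are ours, and the Reader/Updater network split is elided) mirrors the atomic Update step:

```python
import math

class AdaptiveRevisionCoord:
    """One-coordinate sketch of the Updater state in Algorithm 1.
    A Reader calls read() to get (x, gbar), computes a gradient g, and
    later sends update(g, gbar_old) with the gbar it saw at Read time."""
    def __init__(self, alpha=1.0):
        self.gbar, self.z, self.zp, self.x = 0.0, 1.0, 1.0, 0.0
        self.alpha = alpha

    def read(self):
        return self.x, self.gbar

    def update(self, g, gbar_old):
        gbck = self.gbar - gbar_old           # gradients applied since the Read
        eta_old = self.alpha / math.sqrt(self.zp)
        self.z += g * g + 2.0 * g * gbck      # maintain z = G^bck_{1:t}
        self.zp = max(self.z, self.zp)        # enforce a non-increasing eta
        eta = self.alpha / math.sqrt(self.zp)
        self.x -= eta * g                     # the main gradient-descent step
        self.x += (eta_old - eta) * gbck      # revise some previous steps
        self.gbar += g                        # maintain gbar = g_{1:t}

coord = AdaptiveRevisionCoord(alpha=1.0)
x, gbar = coord.read()
coord.update(1.0, gbar)          # no delay: an ordinary adaptive step
assert abs(coord.x + 2 ** -0.5) < 1e-12
```

With no delay, gbck is zero and the revision term vanishes, so the step reduces to a plain adaptive gradient update; only when an Update arrives with a stale gbar_old does the revision line adjust the earlier steps.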
However, even with this tuning they do not obtain the performance of AdaptiveRevision. The performance of AdaptiveRevision (and HypBack and HypFwd) is slightly improved by lowering the learning rate as delays increase, but the effect is comparatively very minor. As anticipated, the performances of AdaptiveRevision, HypBack, and HypFwd are closely grouped. AdaptiveRevision’s delay tolerance can lead to enormous speedups in practice. For example, the leftmost plot of Figure 2 shows that AdaptiveRevision achieves better accuracy with an update delay of 10,000 than AsyncAda-DA achieves with a delay of 1000. Because update delays are proportional to the number of Readers, this means that AdaptiveRevision can be used to train a model an order of magnitude faster than AsyncAda-DA, with no reduction in accuracy. This allows for much faster iteration when data sets are large and parallelism is cheap, which is the case in important real-world problems such as ad click-through rate prediction [14].

4 Conclusions and Future Work We have demonstrated that adaptive tuning and revision of per-coordinate learning rates for distributed gradient descent can significantly improve accuracy as the update delays become large. The key algorithmic technique is maintaining a sum of gradients, which allows the adjustment of all learning rates for gradient updates that occurred between the current Update and its Read. The analysis method is novel, but is also somewhat indirect; an interesting open question is finding a general analysis framework for algorithms of this style. Ideally such an analysis would also remove the technical need for the InOrder assumption, and allow for the analysis of AdaptiveRevision variants of OGD with Projection and Dual Averaging.

⁶We also ran experiments on the rcv1.binary training dataset ($0.6 \times 10^6$ examples, $0.05 \times 10^6$ features) from Chang and Lin [20]; results were qualitatively very similar to those for the URL dataset.
⁷The main purpose of choosing a larger $G_0$ in the theorems was to make the performance of HypBack and AdaptiveRevision provably close to that of HypFwd, even in the worst case. On real data, the performance of the algorithms will typically be close even with $G_0 = 1$.

References
[1] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
[2] Tong Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In ICML, 2004.
[3] Léon Bottou and Olivier Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems, 2008.
[4] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 2012.
[5] Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction using mini-batches. J. Mach. Learn. Res., 13(1), January 2012.
[6] Peter Richtárik and Martin Takáč. Parallel coordinate descent methods for big data optimization. arXiv:1212.0873 [math.OC], 2012. URL http://arxiv.org/abs/1212.0873.
[7] Martin Takáč, Avleen Bijral, Peter Richtárik, and Nati Srebro. Mini-batch primal and dual methods for SVMs. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[8] Daniel Hsu, Nikos Karampatziakis, John Langford, and Alexander J. Smola. Scaling Up Machine Learning, chapter Parallel Online Learning. Cambridge University Press, 2011.
[9] John C. Duchi, Alekh Agarwal, and Martin J. Wainwright. Dual averaging for distributed optimization: Convergence analysis and network scaling. IEEE Trans. Automat. Contr., 57(3):592–606, 2012.
[10] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild!: a lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
[11] John C. Duchi, Michael I. Jordan, and H. Brendan McMahan. Estimation, optimization, and parallelism when data is sparse. In NIPS, 2013.
[12] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. In COLT, 2010.
[13] H. Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex optimization. In COLT, 2010.
[14] H. Brendan McMahan, Gary Holt, David Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, Sharat Chikkerur, Dan Liu, Martin Wattenberg, Arnar Mar Hrafnkelsson, Tom Boulos, and Jeremy Kubica. Ad click prediction: a view from the trenches. In KDD, 2013.
[15] Thore Graepel, Joaquin Quiñonero Candela, Thomas Borchert, and Ralf Herbrich. Web-scale Bayesian click-through rate prediction for sponsored search advertising in Microsoft’s Bing search engine. In ICML, 2010.
[16] Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc’Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In NIPS, 2012.
[17] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. on Optimization, 19(4):1574–1609, January 2009. ISSN 1052-6234. doi: 10.1137/070704277.
[18] John Langford, Alex Smola, and Martin Zinkevich. Slow learners are fast. In Advances in Neural Information Processing Systems 22, 2009.
[19] Justin Ma, Lawrence K. Saul, Stefan Savage, and Geoffrey M. Voelker. Identifying suspicious URLs: an application of large-scale online learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML ’09, 2009.
[20] Chih-Chung Chang and Chih-Jen Lin. LIBSVM data sets. http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/, 2010.
[21] Peter Auer, Nicolò Cesa-Bianchi, and Claudio Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 2002.
Sequential Monte Carlo for Graphical Models Christian A. Naesseth Div. of Automatic Control Linköping University Linköping, Sweden chran60@isy.liu.se Fredrik Lindsten Dept. of Engineering The University of Cambridge Cambridge, UK fsml2@cam.ac.uk Thomas B. Schön Dept. of Information Technology Uppsala University Uppsala, Sweden thomas.schon@it.uu.se Abstract We propose a new framework for how to use sequential Monte Carlo (SMC) algorithms for inference in probabilistic graphical models (PGM). Via a sequential decomposition of the PGM we find a sequence of auxiliary distributions defined on a monotonically increasing sequence of probability spaces. By targeting these auxiliary distributions using SMC we are able to approximate the full joint distribution defined by the PGM. One of the key merits of the SMC sampler is that it provides an unbiased estimate of the partition function of the model. We also show how it can be used within a particle Markov chain Monte Carlo framework in order to construct high-dimensional block-sampling algorithms for general PGMs. 1 Introduction Bayesian inference in statistical models involving a large number of latent random variables is in general a difficult problem. This renders inference methods that are capable of efficiently utilizing structure important tools. Probabilistic Graphical Models (PGMs) are an intuitive and useful way to represent and make use of underlying structure in probability distributions with many interesting areas of applications [1]. Our main contribution is a new framework for constructing non-standard (auxiliary) target distributions of PGMs, utilizing what we call a sequential decomposition of the underlying factor graph, to be targeted by a sequential Monte Carlo (SMC) sampler. This construction enables us to make use of SMC methods developed and studied over the last 20 years, to approximate the full joint distribution defined by the PGM.
As a byproduct, the SMC algorithm provides an unbiased estimate of the partition function (normalization constant). We show how the proposed method can be used as an alternative to standard methods such as the Annealed Importance Sampling (AIS) proposed in [2], when estimating the partition function. We also make use of the proposed SMC algorithm to design efficient, high-dimensional MCMC kernels for the latent variables of the PGM in a particle MCMC framework. This enables inference about the latent variables as well as learning of unknown model parameters in an MCMC setting. During the last decade there has been substantial work on how to leverage SMC algorithms [3] to solve inference problems in PGMs. The first approaches were PAMPAS [4] and nonparametric belief propagation by Sudderth et al. [5, 6]. Since then, several different variants and refinements have been proposed by e.g. Briers et al. [7], Ihler and McAllester [8], Frank et al. [9]. They all rely on various particle approximations of messages sent in a loopy belief propagation algorithm. This means that in general, even in the limit of Monte Carlo samples, they are approximate methods. Compared to these approaches our proposed methods are consistent and provide an unbiased estimate of the normalization constant as a by-product. Another branch of SMC-based methods for graphical models has been suggested by Hamze and de Freitas [10]. Their method builds on the SMC sampler by Del Moral et al. [11], where the initial target is a spanning tree of the original graph and subsequent steps add edges according to an annealing schedule. Everitt [12] extends these ideas to learn parameters using particle MCMC [13]. Yet another take is provided by Carbonetto and de Freitas [14], where an SMC sampler is combined with mean field approximations.
Compared to these methods we can handle both non-Gaussian and/or non-discrete interactions between variables, and there is no requirement to perform MCMC steps within each SMC step. The left-right methods described by Wallach et al. [15], and extended by Buntine [16], to estimate the likelihood of held-out documents in topic models are somewhat related in that they are SMC-inspired. However, these are not actual SMC algorithms and they do not produce an unbiased estimate of the partition function for a finite sample set. On the other hand, a particle-learning-based approach was recently proposed by Scott and Baldridge [17], and it can be viewed as a special case of our method for this specific type of model.

2 Graphical models

A graphical model is a probabilistic model which factorizes according to the structure of an underlying graph G = {V, E}, with vertex set V and edge set E. By this we mean that the joint probability density function (PDF) of the set of random variables indexed by V, X_V := {x_1, ..., x_{|V|}}, can be represented as a product of factors over the cliques of the graph:

p(X_V) = \frac{1}{Z} \prod_{C \in \mathcal{C}} \psi_C(X_C),   (1)

where \mathcal{C} is the set of cliques in G, \psi_C is the factor for clique C, and Z = \int \prod_{C \in \mathcal{C}} \psi_C(x_C) \, dX_V is the partition function.

[Figure 1: Undirected PGM and a corresponding factor graph. (a) Undirected graph over x_1, ..., x_5. (b) Factor graph with factors ψ_1, ..., ψ_5.]

We will frequently use the notation X_I = \bigcup_{i \in I} \{x_i\} for some subset I ⊆ {1, ..., |V|}, and we write \mathcal{X}_I for the range of X_I (i.e., X_I ∈ \mathcal{X}_I). To make the interactions between the random variables explicit we define a factor graph F = {V, Ψ, E′} corresponding to G. The factor graph consists of two types of vertices: the original set of random variables X_V and the factors Ψ = {ψ_C : C ∈ \mathcal{C}}. The edge set E′ consists only of edges from variables to factors.
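To make the factorization (1) concrete, here is a minimal sketch of a discrete factor-graph container in Python. The class and method names are ours, not from the paper's code release [27], and the brute-force partition function is only feasible for tiny models.

```python
import math
from itertools import product


class FactorGraph:
    """Toy factor-graph container: a list of (clique, factor) pairs."""

    def __init__(self):
        self.factors = []  # list of (tuple of variable ids, callable) pairs

    def add_factor(self, clique, fn):
        self.factors.append((tuple(clique), fn))

    def unnormalized(self, assignment):
        """Product of all clique factors at a full assignment (eq. (1) without 1/Z)."""
        p = 1.0
        for clique, fn in self.factors:
            p *= fn(*(assignment[v] for v in clique))
        return p

    def partition_function(self, domains):
        """Brute-force Z by summing over all joint configurations (small models only)."""
        variables = sorted(domains)
        Z = 0.0
        for values in product(*(domains[v] for v in variables)):
            Z += self.unnormalized(dict(zip(variables, values)))
        return Z
```

For example, a single pairwise factor exp(x_1 x_2) over spins {−1, +1} gives Z = 2e + 2/e, which the brute-force sum recovers exactly.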
In Figure 1a we show a simple toy example of an undirected graphical model, and one possible corresponding factor graph, Figure 1b, making the dependencies explicit. Both directed and undirected graphs can be represented by factor graphs.

3 Sequential Monte Carlo

In this section we propose a way to sequentially decompose a graphical model, which we then make use of to design an SMC algorithm for the PGM.

3.1 Sequential decomposition of graphical models

SMC methods can be used to approximate a sequence of probability distributions on a sequence of probability spaces of increasing dimension. This is done by recursively updating a set of samples, or particles, with corresponding nonnegative importance weights. The typical scenario is that of state inference in state-space models, where the probability distributions targeted by the SMC sampler are the joint smoothing distributions of a sequence of latent states conditioned on a sequence of observations; see e.g. Doucet and Johansen [18] for applications of this type. However, SMC is not limited to these cases and is applicable to a much wider class of models. To be able to use SMC for inference in PGMs we have to define a sequence of target distributions. However, these target distributions do not have to be marginal distributions under p(X_V). Indeed, as long as the sequence of target distributions is constructed in such a way that, at some final iteration, we recover p(X_V), all the intermediate target distributions may be chosen quite arbitrarily.

[Figure 2: Examples of five- (top) and three-step (bottom) sequential decompositions of Figure 1; each panel shows the subgraph covered by the intermediate target \tilde\gamma_k(X_{L_k}).]
This is key to our development, since it lets us use the structure of the PGM to define a sequence of intermediate target distributions for the sampler. We do this by a so-called sequential decomposition of the graphical model. This amounts to simply adding factors to the target distribution, from the product of factors in (1), at each step of the algorithm, iterating until all the factors have been added. Constructing an artificial sequence of intermediate target distributions for an SMC sampler is a simple, albeit underutilized, idea, as it opens up for using SMC samplers for inference in a wide range of probabilistic models; see e.g. Bouchard-Côté et al. [19] and Del Moral et al. [11] for a few applications of this approach. Given a graph G with cliques \mathcal{C}, let {ψ_k}_{k=1}^K be a sequence of factors defined by

\psi_k(X_{I_k}) = \prod_{C \in \mathcal{C}_k} \psi_C(X_C),

where the \mathcal{C}_k ⊂ \mathcal{C} are chosen such that \bigcup_{k=1}^K \mathcal{C}_k = \mathcal{C} and \mathcal{C}_i ∩ \mathcal{C}_j = ∅ for i ≠ j, and where I_k ⊆ {1, ..., |V|} is the index set of the variables in the domain of ψ_k, I_k = \bigcup_{C \in \mathcal{C}_k} C. We emphasize that the cliques in \mathcal{C} need not be maximal. In fact, even auxiliary factors may be introduced to allow for e.g. annealing between distributions. It follows that the PDF in (1) can be written as p(X_V) = \frac{1}{Z} \prod_{k=1}^K \psi_k(X_{I_k}). In principle, the choice and ordering of the \mathcal{C}_k's are arbitrary, but in practice they will affect the performance of the proposed sampler. However, in many common PGMs an intuitive ordering can be deduced from the structure of the model; see Section 5. The sequential decomposition of the PGM is then based on the auxiliary quantities

\tilde\gamma_k(X_{L_k}) := \prod_{\ell=1}^k \psi_\ell(X_{I_\ell}), \quad \text{with } L_k := \bigcup_{\ell=1}^k I_\ell,

for k ∈ {1, ..., K}. By construction, L_K = V and the joint PDF p(X_{L_K}) is proportional to \tilde\gamma_K(X_{L_K}). Consequently, by using \tilde\gamma_k(X_{L_k}) as the basis for the target sequence of an SMC sampler, we will obtain the correct target distribution at iteration K.
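Once a partition of the cliques has been fixed, the decomposition is purely mechanical. The sketch below (our own illustrative code; cliques are assumed given as tuples of variable indices) returns the factor domains I_k and the growing supports L_k:

```python
def sequential_decomposition(clique_groups):
    """Given a partition [C_1, ..., C_K] of the cliques of the graph, return
    the factor domains I_k (variables in the domain of psi_k) and the
    accumulated supports L_k = I_1 u ... u I_k."""
    I, L = [], []
    covered = set()
    for Ck in clique_groups:
        Ik = set().union(*(set(C) for C in Ck))  # union of the cliques in C_k
        covered |= Ik
        I.append(Ik)
        L.append(set(covered))
    return I, L
```

For instance, grouping the edge cliques of Figure 1 one at a time starting from x_5 yields L_K = {1, ..., 5}, i.e. the final support recovers all of V, as required.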
However, a further requirement for this to be possible is that all the functions in the sequence are normalizable. For many graphical models this is indeed the case, and then we can use \tilde\gamma_k(X_{L_k}), k = 1, ..., K, directly as our sequence of intermediate target densities. If, however, \int \tilde\gamma_k(X_{L_k}) \, dX_{L_k} = ∞ for some k < K, an easy remedy is to modify the target density to ensure normalizability. This is done by setting \gamma_k(X_{L_k}) = \tilde\gamma_k(X_{L_k}) q_k(X_{L_k}), where q_k(X_{L_k}) is chosen so that \int \gamma_k(X_{L_k}) \, dX_{L_k} < ∞. We set q_K(X_{L_K}) ≡ 1 to make sure that \gamma_K(X_{L_K}) ∝ p(X_{L_K}). Note that the integral \int \gamma_k(X_{L_k}) \, dX_{L_k} need not be computed explicitly, as long as it can be established that it is finite. With this modification we obtain a sequence of unnormalized intermediate target densities for the SMC sampler as \gamma_1(X_{L_1}) = q_1(X_{L_1}) \psi_1(X_{L_1}) and

\gamma_k(X_{L_k}) = \gamma_{k-1}(X_{L_{k-1}}) \frac{q_k(X_{L_k})}{q_{k-1}(X_{L_{k-1}})} \psi_k(X_{I_k})

for k = 2, ..., K. The corresponding normalized PDFs are given by \bar\gamma_k(X_{L_k}) = \gamma_k(X_{L_k}) / Z_k, where Z_k = \int \gamma_k(X_{L_k}) \, dX_{L_k}. Figure 2 shows two examples of possible subgraphs obtained when applying the decomposition, in two different ways, to the factor graph example in Figure 1.

3.2 Sequential Monte Carlo for PGMs

At iteration k, the SMC sampler approximates the target distribution \bar\gamma_k by a collection of weighted particles {X^i_{L_k}, w^i_k}_{i=1}^N. These samples define an empirical point-mass approximation of the target distribution. In what follows, we shall use the notation \xi_k := X_{I_k \setminus L_{k-1}} to refer to the collection of random variables that are in the domain of \gamma_k but not in the domain of \gamma_{k-1}. This corresponds to the collection of random variables with which the particles are augmented at each iteration. Initially, \bar\gamma_1 is approximated by importance sampling. We proceed inductively and assume that we have at hand a weighted sample {X^i_{L_{k-1}}, w^i_{k-1}}_{i=1}^N approximating \bar\gamma_{k-1}(X_{L_{k-1}}).
This sample is propagated forward by simulating, conditionally independently given the particle generation up to iteration k−1, an ancestor index a^i_k with

P(a^i_k = j) ∝ \nu^j_{k-1} w^j_{k-1}, \quad j = 1, ..., N,

where \nu^i_{k-1} := \nu_{k-1}(X^i_{L_{k-1}}), known as adjustment multiplier weights, are used in the auxiliary SMC framework to adapt the resampling procedure to the current target density \bar\gamma_k [20]. Given the ancestor indices, we simulate particle increments {\xi^i_k}_{i=1}^N from a proposal density \xi^i_k ∼ r_k(· | X^{a^i_k}_{L_{k-1}}) on \mathcal{X}_{I_k \setminus L_{k-1}}, and augment the particles as X^i_{L_k} := X^{a^i_k}_{L_{k-1}} ∪ \xi^i_k. After having performed this procedure for the N ancestor indices and particles, they are assigned importance weights w^i_k = W_k(X^i_{L_k}). The weight function, for k ≥ 2, is given by

W_k(X_{L_k}) = \frac{\gamma_k(X_{L_k})}{\gamma_{k-1}(X_{L_{k-1}}) \, \nu_{k-1}(X_{L_{k-1}}) \, r_k(\xi_k | X_{L_{k-1}})},   (2)

where, again, we write \xi_k = X_{I_k \setminus L_{k-1}}. We give a summary of the SMC method in Algorithm 1.

Algorithm 1 Sequential Monte Carlo (SMC)
Perform each step for i = 1, ..., N.
  Sample X^i_{L_1} ∼ r_1(·). Set w^i_1 = \gamma_1(X^i_{L_1}) / r_1(X^i_{L_1}).
  for k = 2 to K do
    Sample a^i_k according to P(a^i_k = j) = \nu^j_{k-1} w^j_{k-1} / \sum_l \nu^l_{k-1} w^l_{k-1}.
    Sample \xi^i_k ∼ r_k(· | X^{a^i_k}_{L_{k-1}}) and set X^i_{L_k} = X^{a^i_k}_{L_{k-1}} ∪ \xi^i_k.
    Set w^i_k = W_k(X^i_{L_k}).
  end for

In the case that I_k \setminus L_{k-1} = ∅ for some k, the resampling and propagation steps are superfluous. The easiest way to handle this is to simply skip these steps and directly compute importance weights. An alternative approach is to bridge the two target distributions \bar\gamma_{k-1} and \bar\gamma_k similarly to Del Moral et al. [11]. Since the proposed sampler for PGMs falls within a general SMC framework, standard convergence analysis applies. See e.g. Del Moral [21] for a comprehensive collection of theoretical results on consistency, central limit theorems, and non-asymptotic bounds for SMC samplers. The choices of proposal density and adjustment multipliers can quite significantly affect the performance of the sampler.
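Algorithm 1 is straightforward to instantiate for a small discrete model. The sketch below is our own toy code (not the authors' release [27]): it runs the sampler on a binary chain with pairwise factors ψ_k(x_{k−1}, x_k) = exp(J·1[x_{k−1} = x_k]) and ψ_1 ≡ 1, using uniform proposals, unit adjustment multipliers ν ≡ 1, and multinomial resampling; it also accumulates the partition-function estimate discussed in Section 3.3.

```python
import math
import random


def smc_chain(K, N, J=1.0, seed=0):
    """Algorithm 1 on a toy binary chain: psi_1(x_1) = 1 and, for k >= 2,
    psi_k(x_{k-1}, x_k) = exp(J * 1[x_{k-1} = x_k]).  Proposals r_k are
    Uniform{0, 1}, adjustment multipliers are nu = 1, and resampling is
    multinomial.  Returns particles, final weights, and the estimate of Z."""
    rng = random.Random(seed)
    # k = 1: importance sampling; gamma_1 = psi_1 = 1 and r_1(x) = 1/2.
    particles = [[rng.randint(0, 1)] for _ in range(N)]
    weights = [1.0 / 0.5] * N
    log_Z = math.log(sum(weights) / N)
    for k in range(2, K + 1):
        # Resample ancestor indices a_k^i with P(a_k^i = j) prop. to w_{k-1}^j.
        ancestors = rng.choices(range(N), weights=weights, k=N)
        new_particles, new_weights = [], []
        for a in ancestors:
            xi = rng.randint(0, 1)            # xi_k ~ r_k = Uniform{0, 1}
            traj = particles[a] + [xi]
            # Incremental weight (2): psi_k(x_{k-1}, x_k) / r_k(xi_k).
            w = math.exp(J * (traj[-2] == traj[-1])) / 0.5
            new_particles.append(traj)
            new_weights.append(w)
        particles, weights = new_particles, new_weights
        log_Z += math.log(sum(weights) / N)
    return particles, weights, math.exp(log_Z)
```

For this chain the exact partition function is Z = 2(1 + e^J)^{K−1}, which the returned estimate recovers to within Monte Carlo error.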
It follows from (2) that W_k(X_{L_k}) ≡ 1 if we choose

\nu_{k-1}(X_{L_{k-1}}) = \int \frac{\gamma_k(X_{L_k})}{\gamma_{k-1}(X_{L_{k-1}})} \, d\xi_k \quad \text{and} \quad r_k(\xi_k | X_{L_{k-1}}) = \frac{\gamma_k(X_{L_k})}{\nu_{k-1}(X_{L_{k-1}}) \, \gamma_{k-1}(X_{L_{k-1}})}.

In this case, the SMC sampler is said to be fully adapted.

3.3 Estimating the partition function

The partition function of a graphical model is a very interesting quantity in many applications. Examples include likelihood-based learning of the parameters of the PGM, statistical mechanics where it is related to the free energy of a system of objects, and information theory where it is related to the capacity of a channel. However, as stated by Hamze and de Freitas [10], estimating the partition function of a loopy graphical model is a "notoriously difficult" task. Indeed, even for discrete problems simple and accurate estimators have proved to be elusive, and MCMC methods do not provide any simple way of computing the partition function. On the contrary, SMC provides a straightforward estimator of the normalizing constant (i.e. the partition function), given as a byproduct of the sampler according to

\hat Z^N_k := \left( \frac{1}{N} \sum_{i=1}^N w^i_k \right) \prod_{\ell=1}^{k-1} \left( \frac{1}{N} \sum_{i=1}^N \nu^i_\ell w^i_\ell \right).   (3)

It may not be obvious why (3) is a natural estimator of the normalizing constant Z_k. However, a by now well known result is that this SMC-based estimator is unbiased. This result is due to Del Moral [21, Proposition 7.4.1] and, for the special case of inference in state-space models, it has also been established by Pitt et al. [22]. For completeness we also offer a proof using the present notation in the supplementary material. Since Z_K = Z, we thus obtain an estimator of the partition function of the PGM at iteration K of the sampler. Besides being unbiased, this estimator is also consistent and asymptotically normal; see Del Moral [21].
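As a plain-Python illustration (helper and argument names are ours), the estimator in (3) is just the mean of the final weights multiplied by the product of the means of the adjustment-multiplied weights from the earlier iterations:

```python
def partition_estimate(weight_history, nu_history=None):
    """SMC estimate (3) of Z_k.  weight_history is a list of K lists of
    particle weights, one list per iteration; nu_history holds the matching
    adjustment multipliers and defaults to nu = 1 (multinomial resampling)."""
    K = len(weight_history)
    if nu_history is None:
        nu_history = [[1.0] * len(w) for w in weight_history]
    # Mean of the final weights ...
    est = sum(weight_history[-1]) / len(weight_history[-1])
    # ... times the means of nu_l * w_l for l = 1, ..., K - 1.
    for l in range(K - 1):
        Nl = len(weight_history[l])
        est *= sum(nu * w for nu, w in zip(nu_history[l], weight_history[l])) / Nl
    return est
```

With two iterations of two particles each, weights [[2, 2], [1, 3]] and ν ≡ 1, the estimate is ((1+3)/2)·((2+2)/2) = 4.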
In [23] we have studied a specific information-theoretic application (computing the capacity of a two-dimensional channel) and, inspired by the algorithm proposed here, we were able to design a sampler with significantly improved performance compared to the previous state of the art.

4 Particle MCMC and partial blocking

Two shortcomings of SMC are: (i) it does not solve the parameter learning problem, and (ii) the quality of the estimates of marginal distributions p(X_{L_k}) = \int \bar\gamma_K(X_{L_K}) \, dX_{L_K \setminus L_k} deteriorates for k ≪ K due to the fact that the particle trajectories degenerate as the particle system evolves (see e.g. [18]). Many methods have been proposed in the literature to address these problems; see e.g. [24] and the references therein. Among these, the recently proposed particle MCMC (PMCMC) framework [13] plays a prominent role. PMCMC algorithms make use of SMC to construct (in general) high-dimensional Markov kernels that can be used within MCMC. These methods were shown by [13] to be exact, in the sense that the apparent particle approximation in the construction of the kernel does not change its invariant distribution. This property holds for any number of particles N ≥ 2, i.e., PMCMC does not rely on asymptotics in N for correctness. The fact that the SMC sampler for PGMs presented in Algorithm 1 fits under a general SMC umbrella implies that we can straightforwardly make use of this algorithm within PMCMC. This allows us to construct a Markov kernel (indexed by the number of particles N) on the space of latent variables of the PGM, P_N(X′_{L_K}, dX_{L_K}), which leaves the full joint distribution p(X_V) invariant. We do not dwell on the details of the implementation here, but refer instead to [13] for the general setup and [25] for the specific method that we have used in the numerical illustration in Section 5. PMCMC methods enable blocking of the latent variables of the PGM in an MCMC scheme.
Simulating all the latent variables X_{L_K} jointly is useful since, in general, this will reduce the autocorrelation when compared to simulating the variables x_j one at a time [26]. However, it is also possible to employ PMCMC to construct an algorithm in between these two extremes, a strategy that we believe will be particularly useful in the context of PGMs. Let {V_m, m ∈ {1, ..., M}} be a partition of V. Ideally, a Gibbs sampler for the joint distribution p(X_V) could then be constructed by simulating, using a systematic or a random scan, from the conditional distributions

p(X_{V_m} | X_{V \setminus V_m}) \quad \text{for } m = 1, ..., M.   (4)

We refer to this strategy as partial blocking, since it amounts to simulating a subset of the variables, but not necessarily all of them, jointly. Note that if we set M = |V| and V_m = {m} for m = 1, ..., M, this scheme reduces to a standard Gibbs sampler. At the other extreme, with M = 1 and V_1 = V, we get a fully blocked sampler which directly targets the full joint distribution p(X_V). From (1) it follows that the conditional distributions (4) can be expressed as

p(X_{V_m} | X_{V \setminus V_m}) ∝ \prod_{C \in \mathcal{C}_m} \psi_C(X_C),   (5)

where \mathcal{C}_m = {C ∈ \mathcal{C} : C ∩ V_m ≠ ∅}. While it is in general not possible to sample exactly from these conditionals, we can make use of PMCMC to facilitate a partially blocked Gibbs sampler for a PGM. By letting p(X_{V_m} | X_{V \setminus V_m}) be the target distribution for the SMC sampler of Algorithm 1, we can construct a PMCMC kernel P^m_N that leaves the conditional distribution (5) invariant. This suggests the following approach: with X′_V being the current state of the Markov chain, update block m by sampling

X_{V_m} ∼ P^m_N⟨X′_{V \setminus V_m}⟩(X′_{V_m}, ·).   (6)

Here we have indicated explicitly in the notation that the PMCMC kernel for the conditional distribution p(X_{V_m} | X_{V \setminus V_m}) depends on both X′_{V \setminus V_m} (which is considered to be fixed throughout the sampling procedure) and on X′_{V_m} (which defines the current state of the PMCMC procedure).
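Selecting the factors that enter the partial-blocking conditional (5) is a purely structural operation: keep exactly the cliques that intersect the block V_m. A minimal sketch (our own illustrative names):

```python
def conditional_cliques(cliques, block):
    """Return C_m = {C in C : C intersects V_m}, i.e. the cliques whose
    factors appear in the conditional p(X_{V_m} | X_{V minus V_m}) of (5)."""
    block = set(block)
    return [C for C in cliques if block & set(C)]
```

For instance, on a 4-cycle with edge cliques {1,2}, {2,3}, {3,4}, {4,1}, blocking V_m = {1, 2} keeps three of the four cliques, since only {3,4} does not touch the block.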
As mentioned above, while being generally applicable, we believe that partial blocking of PMCMC samplers will be particularly useful for PGMs. The reason is that we can choose the vertex sets V_m, m = 1, ..., M, in order to facilitate simple sequential decompositions of the induced subgraphs. For instance, it is always possible to choose the partition in such a way that all the induced subgraphs are chains.

5 Experiments

In this section we evaluate the proposed SMC sampler on three examples to illustrate the merits of our approach. Additional details and results are available in the supplementary material, and code to reproduce the results can be found in [27]. We first consider an example from statistical mechanics, the classical XY model, to illustrate the impact of the sequential decomposition. Furthermore, we profile our algorithm against the "gold standard" AIS [2] and Annealed Sequential Importance Resampling (ASIR¹) [11]. In the second example we apply the proposed method to the problem of scoring topic models, and finally we consider a simple toy model, a Gaussian Markov random field (MRF), which illustrates that our proposed method has the potential to significantly decrease correlations between samples in an MCMC scheme. Furthermore, we provide an exact SMC approximation of the tree sampler by Hamze and de Freitas [28] and thereby extend the scope of this powerful method.

5.1 Classical XY model

[Figure 3: Mean-squared errors in the estimates of log Z versus sample size N, for AIS and four different orderings (RND-N, SPIRAL, DIAG, L-R) in the proposed SMC framework.]

The classical XY model (see e.g. [29]) is a member of the family of n-vector models used in statistical mechanics. It can be seen as a generalization of the well-known Ising model with a two-dimensional electromagnetic spin. The spin vector is described by its angle x ∈ (−π, π].
We will consider square lattices with periodic boundary conditions. The joint PDF of the classical XY model with equal interaction is given by

p(X_V) ∝ e^{\beta \sum_{(i,j) \in E} \cos(x_i - x_j)},   (7)

where β denotes the inverse temperature. To evaluate the effect of different sequence orderings on the accuracy of the estimates of the log-normalizing-constant log Z, we ran several experiments on a 16 × 16 XY model with β = 1.1 (approximately the critical inverse temperature [30]). For simplicity we add one node at a time, together with all factors bridging this node with previously added nodes. Full adaptation is possible in this case because the optimal proposal is a von Mises distribution. We show results for the following cases:

Random neighbour (RND-N): First node selected randomly among all nodes; subsequent nodes selected randomly from the set of nodes with a neighbour in X_{L_{k-1}}.
Diagonal (DIAG): Nodes added by traversing diagonally (45° angle) from left to right.
Spiral (SPIRAL): Nodes added spiralling in towards the middle from the edges.
Left-Right (L-R): Nodes added by traversing the graph left to right, from top to bottom.

We also give results for AIS with single-site Gibbs updates and 1,000 annealing distributions linearly spaced from zero to one, starting from a uniform distribution (geometric spacing did not yield any improvement over linear spacing in this case). The "true value" was estimated using AIS with 10,000 intermediate distributions and 5,000 importance samples. We can see from the results in Figure 3 that designing a good sequential decomposition for the SMC sampler is important. However, the intuitive and fairly simple choice L-R does give very good results, comparable to those of AIS. Furthermore, we consider a larger lattice of size 64 × 64 and evaluate the performance of the L-R ordering compared to AIS and the ASIR method. Figure 4 displays box-plots of 10 independent runs.
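For reference, the unnormalized log-density in (7) and the periodic-lattice edge structure are simple to write down. This is our own sketch, with nodes indexed row-major; it is not tied to the paper's implementation.

```python
import math


def torus_edges(L):
    """Edge list of an L x L square lattice with periodic boundary conditions,
    nodes numbered row-major; each undirected edge appears exactly once."""
    edges = []
    for r in range(L):
        for c in range(L):
            v = r * L + c
            edges.append((v, r * L + (c + 1) % L))    # right neighbour (wraps)
            edges.append((v, ((r + 1) % L) * L + c))  # down neighbour (wraps)
    return edges


def xy_log_density(x, edges, beta):
    """Unnormalized log-density of the classical XY model (7):
    beta * sum over edges of cos(x_i - x_j), with angles x_i in (-pi, pi]."""
    return beta * sum(math.cos(x[i] - x[j]) for i, j in edges)
```

A sanity check: a 4 × 4 torus has 32 edges, and the all-zero configuration attains the maximum log-density β·|E|.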
We set N = 10^5 for the proposed SMC sampler and then match the computational costs of AIS and ASIR to this computational budget. A fair amount of time was spent tuning the AIS and ASIR algorithms; 10,000 linear annealing distributions seemed to give the best performance in these cases. We can see that the L-R ordering gives results comparable to fairly well-tuned AIS and ASIR algorithms, with the ordering of the methods depending on the temperature of the model. One option that does make the SMC algorithm interesting for these types of applications is that it can easily be parallelized over the particles, whereas AIS/ASIR has limited possibilities of parallel implementation over the (crucial) annealing steps.

Footnote 1: ASIR is a specific instance of the SMC sampler by [11], corresponding to AIS with the addition of resampling steps, but to avoid confusion with the proposed method we choose to refer to it as ASIR.

[Figure 4: The logarithm of the estimated partition function for the 64 × 64 XY model with inverse temperature 0.5 (left), 1.1 (middle) and 1.7 (right), for AIS, ASIR and SMC L-R.]

[Figure 6: Estimates of the log-likelihood of held-out documents for various datasets: (a) small simulated example, (b) PMC, (c) 20 newsgroups.]

5.2 Likelihood estimation in topic models

[Figure 5: LDA as a graphical model.]

Topic models such as Latent Dirichlet Allocation (LDA) [31] are popular models for reasoning about large text corpora. Model evaluation is often conducted by computing the likelihood of held-out documents w.r.t. a learnt model.
However, this is a challenging problem in its own right (one which has received much recent interest [15, 16, 17]), since it essentially corresponds to computing the partition function of a graphical model; see Figure 5. The SMC procedure of Algorithm 1 can be used to solve this problem by defining a sequential decomposition of the graphical model. In particular, we consider the decomposition corresponding to first including the node θ and then, subsequently, introducing the nodes z_1 to z_M in any order. Interestingly, if we then make use of Rao-Blackwellization over the variable θ, the SMC sampler of Algorithm 1 reduces exactly to a method that has previously been proposed for this specific problem [17]. In [17], the method is derived by reformulating the model in terms of its sufficient statistics and phrasing this as a particle learning problem; here we obtain the same procedure as a special case of the general SMC algorithm operating on the original model. We use the same data and learnt models as Wallach et al. [15], i.e. 20 newsgroups and PubMed Central abstracts (PMC). We compare with the Left-Right-Sequential (LRS) sampler [16], which is an improvement over the method proposed by Wallach et al. [15]. Results on simulated and real-data experiments are provided in Figure 6. For the simulated example (Figure 6a), we use a small model with 10 words and 4 topics to be able to compute the exact log-likelihood. We keep the number of particles in the SMC algorithm equal to the number of Gibbs steps in LRS; this means LRS is about an order of magnitude more computationally demanding than the SMC method. Despite the fact that the SMC sampler uses only about a tenth of the computational time of the LRS sampler, it performs significantly better in terms of estimator variance. The other two plots show results on real data with 10 held-out documents for each dataset.
For a fixed number of Gibbs steps we choose the number of particles for each document to make the computational cost approximately equal. Run #2 has twice the number of particles/samples of run #1. We show the mean of 10 runs, with error bars estimated using bootstrapping with 10,000 samples. Computing the logarithm of Ẑ introduces a negative bias, which means larger values of log Ẑ typically imply more accurate results. The results on real data do not show the drastic improvement seen in the simulated example, which could be due to degeneracy problems for long documents. An interesting approach that could improve results would be to use an SMC algorithm tailored to discrete distributions, e.g. Fearnhead and Clifford [32].

5.3 Gaussian MRF

Finally, we consider a simple toy model to illustrate how the SMC sampler of Algorithm 1 can be incorporated in PMCMC sampling. We simulate data from a zero-mean Gaussian 10 × 10 lattice MRF with observation and interaction standard deviations of σ_i = 1 and σ_ij = 0.1, respectively. We use the proposed SMC algorithm together with the PMCMC method by Lindsten et al. [25]. We compare this with standard Gibbs sampling and the tree sampler by Hamze and de Freitas [28].

[Figure 7: The empirical ACF for Gibbs sampling, PMCMC, PMCMC with partial blocking, and tree sampling.]

We use a moderate number of N = 50 particles in the PMCMC sampler (recall that it admits the correct invariant distribution for any N ≥ 2). In Figure 7 we can see the empirical autocorrelation functions (ACF), centered around the true posterior mean, for variable x_82 (selected randomly from among X_V; similar results hold for all the variables of the model). Due to the strong interaction between the latent variables, the samples generated by the standard Gibbs sampler are strongly correlated.
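The empirical ACF plotted in Figure 7 can be computed from a scalar MCMC trace with the standard textbook estimator; the code below is our own sketch, not the paper's.

```python
def empirical_acf(samples, max_lag):
    """Empirical autocorrelation function of a scalar MCMC trace: the lagged
    autocovariance (normalized by n) divided by the sample variance."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    acf = []
    for lag in range(max_lag + 1):
        cov = sum((samples[t] - mean) * (samples[t + lag] - mean)
                  for t in range(n - lag)) / n
        acf.append(cov / var)
    return acf
```

A slowly decaying ACF (as for the single-site Gibbs sampler in Figure 7) indicates strongly correlated samples; a rapidly decaying one indicates near-independent draws.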
Tree sampling and PMCMC with partial blocking show nearly identical gains compared to Gibbs. This is interesting, since it suggests that simulating from the SMC-based PMCMC kernel can be almost as efficient as exact simulation, even using a moderate number of particles. Indeed, PMCMC with partial blocking can be viewed as an exact SMC approximation of the tree sampler, extending the scope of tree sampling beyond discrete and Gaussian models. The fully blocked PMCMC algorithm achieves the best ACF, dropping off to zero considerably faster than the other methods. This is not surprising, since this sampler simulates all the latent variables jointly, which reduces the autocorrelation, in particular when the latent variables are strongly dependent. However, it should be noted that this method also has the highest computational cost per iteration.

6 Conclusion

We have proposed a new framework for inference in PGMs using SMC and illustrated it on three examples. These examples show that it can be a viable alternative to standard methods used for inference and partition function estimation problems. An interesting avenue for future work is combining our proposed methods with AIS, to see if we can improve on both.

Acknowledgments

We would like to thank Iain Murray for his kind and very prompt help in providing the data for the LDA example. This work was supported by the projects Learning of complex dynamical systems (contract number: 637-2014-466) and Probabilistic modeling of dynamical systems (contract number: 621-2013-5524), both funded by the Swedish Research Council.

References

[1] M. I. Jordan. Graphical models. Statistical Science, 19(1):140–155, 2004.
[2] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.
[3] A. Doucet, N. de Freitas, N. Gordon, et al. Sequential Monte Carlo Methods in Practice. Springer, New York, 2001.
[4] M. Isard. PAMPAS: Real-valued graphical models for computer vision.
In Proceedings of the conference on Computer Vision and Pattern Recognition (CVPR), Madison, WI, USA, June 2003.
[5] E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In Proceedings of the conference on Computer Vision and Pattern Recognition (CVPR), Madison, WI, USA, 2003.
[6] E. B. Sudderth, A. T. Ihler, M. Isard, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. Communications of the ACM, 53(10):95–103, 2010.
[7] M. Briers, A. Doucet, and S. S. Singh. Sequential auxiliary particle belief propagation. In Proceedings of the 8th International Conference on Information Fusion, Philadelphia, PA, USA, 2005.
[8] A. T. Ihler and D. A. McAllester. Particle belief propagation. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), Clearwater Beach, FL, USA, 2009.
[9] A. Frank, P. Smyth, and A. T. Ihler. Particle-based variational inference for continuous systems. In Advances in Neural Information Processing Systems (NIPS), pages 826–834, 2009.
[10] F. Hamze and N. de Freitas. Hot coupling: a particle approach to inference and normalization on pairwise undirected graphs of arbitrary topology. In Advances in Neural Information Processing Systems (NIPS), 2005.
[11] P. Del Moral, A. Doucet, and A. Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B, 68(3):411–436, 2006.
[12] R. G. Everitt. Bayesian parameter estimation for latent Markov random fields and social networks. Journal of Computational and Graphical Statistics, 21(4):940–960, 2012.
[13] C. Andrieu, A. Doucet, and R. Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B, 72(3):269–342, 2010.
[14] P. Carbonetto and N. de Freitas. Conditional mean field. In Advances in Neural Information Processing Systems (NIPS) 19. MIT Press, 2007.
[15] H. M. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno.
Evaluation methods for topic models. In Proceedings of the 26th International Conference on Machine Learning, pages 1105–1112, 2009.
[16] W. Buntine. Estimating likelihoods for topic models. In Advances in Machine Learning, pages 51–64. Springer, 2009.
[17] G. S. Scott and J. Baldridge. A recursive estimate for the predictive likelihood in a topic model. In Proceedings of the 16th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 1105–1112, Clearwater Beach, FL, USA, 2009.
[18] A. Doucet and A. Johansen. A tutorial on particle filtering and smoothing: Fifteen years later. In D. Crisan and B. Rozovskii, editors, The Oxford Handbook of Nonlinear Filtering. Oxford University Press, 2011.
[19] A. Bouchard-Côté, S. Sankararaman, and M. I. Jordan. Phylogenetic inference via sequential Monte Carlo. Systematic Biology, 61(4):579–593, 2012.
[20] M. K. Pitt and N. Shephard. Filtering via simulation: Auxiliary particle filters. Journal of the American Statistical Association, 94(446):590–599, 1999.
[21] P. Del Moral. Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications. Probability and its Applications. Springer, 2004.
[22] M. K. Pitt, R. S. Silva, P. Giordani, and R. Kohn. On some properties of Markov chain Monte Carlo simulation methods based on the particle filter. Journal of Econometrics, 171:134–151, 2012.
[23] C. A. Naesseth, F. Lindsten, and T. B. Schön. Capacity estimation of two-dimensional channels using sequential Monte Carlo. In Proceedings of the IEEE Information Theory Workshop (ITW), Hobart, Tasmania, Australia, November 2014.
[24] F. Lindsten and T. B. Schön. Backward simulation methods for Monte Carlo statistical inference. Foundations and Trends in Machine Learning, 6(1):1–143, 2013.
[25] F. Lindsten, M. I. Jordan, and T. B. Schön. Particle Gibbs with ancestor sampling. Journal of Machine Learning Research, 15:2145–2184, June 2014.
[26] C. P. Robert and G. Casella.
Monte Carlo Statistical Methods. Springer, New York, 2004.
[27] C. A. Naesseth, F. Lindsten, and T. B. Schön. smc-pgm, 2014. URL http://dx.doi.org/10.5281/zenodo.11947.
[28] F. Hamze and N. de Freitas. From fields to trees. In Proceedings of the 20th conference on Uncertainty in Artificial Intelligence (UAI), Banff, Canada, July 2004.
[29] J. M. Kosterlitz and D. J. Thouless. Ordering, metastability and phase transitions in two-dimensional systems. Journal of Physics C: Solid State Physics, 6(7):1181, 1973.
[30] Y. Tomita and Y. Okabe. Probability-changing cluster algorithm for two-dimensional XY and clock models. Physical Review B: Condensed Matter and Materials Physics, 65:184405, 2002.
[31] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, March 2003.
[32] P. Fearnhead and P. Clifford. On-line inference for hidden Markov models via particle filters. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(4):887–899, 2003.
Learning Time-Varying Coverage Functions

Nan Du†, Yingyu Liang‡, Maria-Florina Balcan⋄, Le Song†
†College of Computing, Georgia Institute of Technology
‡Department of Computer Science, Princeton University
⋄School of Computer Science, Carnegie Mellon University
dunan@gatech.edu, yingyul@cs.princeton.edu, ninamf@cs.cmu.edu, lsong@cc.gatech.edu

Abstract

Coverage functions are an important class of discrete functions that capture the law of diminishing returns arising naturally from applications in social network analysis, machine learning, and algorithmic game theory. In this paper, we propose a new problem of learning time-varying coverage functions, and develop a novel parametrization of these functions using random features. Based on the connection between time-varying coverage functions and counting processes, we also propose an efficient parameter learning algorithm based on likelihood maximization, and provide a sample complexity analysis. We applied our algorithm to the influence function estimation problem in information diffusion in social networks, and show that with few assumptions about the diffusion processes, our algorithm is able to estimate influence significantly more accurately than existing approaches on both synthetic and real-world data.

1 Introduction

Coverage functions are a special class of the more general submodular functions, which play an important role in combinatorial optimization with many interesting applications in social network analysis [1], machine learning [2], and economics and algorithmic game theory [3]. A particularly important example of coverage functions in practice is the influence function of users in information diffusion modeling [1]: news spreads across social networks by word-of-mouth, and a set of influential sources can collectively trigger a large number of follow-ups.
Another example of coverage functions is the valuation functions of customers in economics and game theory [3] — customers are thought to have certain requirements, and the items being bundled and offered fulfill certain subsets of these demands. Theoretically, it is usually assumed that users’ influence or customers’ valuations are known in advance as an oracle. In practice, however, these functions must be learned. For example, given past traces of information spreading in social networks, a social platform host would like to estimate how many follow-ups a set of users can trigger. Or, given past data of customer reactions to different bundles, a retailer would like to estimate how likely customers would respond to new packages of goods. Learning such combinatorial functions has attracted many recent research efforts from both theoretical and practical sides (e.g., [4, 5, 6, 7, 8]), many of which show that coverage functions can be learned from just a polynomial number of samples. However, prior work has largely ignored an important dynamic aspect of coverage functions. For instance, information spreading is a dynamic process in social networks, and the number of follow-ups of a fixed set of sources can increase as the observation time increases. A bundle of items or features offered to customers may trigger a sequence of customer actions over time. These real-world problems inspire and motivate us to consider a novel time-varying coverage function, f(S, t), which is a coverage function of the set S when we fix a time t, and a continuous monotonic function of time t when we fix a set S. While learning time-varying combinatorial structures has been explored in the graphical model setting (e.g., [9, 10]), as far as we are aware, learning of time-varying coverage functions has not been addressed in the literature.
Furthermore, we are interested in estimating the entire function of t, rather than just treating the time t as a discrete index and learning the function value at a small number of discrete points. From this perspective, our formulation is a generalization of the most recent work [8], with even fewer assumptions about the data used to learn the model. Generally, we assume that the historical data are provided in pairs of a set and a collection of timestamps at which events caused by the set occur. Hence, such a collection of temporal events associated with a particular set Si can be modeled in a principled way by a counting process Ni(t), t ⩾ 0, which is a stochastic process with values that are nonnegative, integer, and nondecreasing over time [11]. For instance, in the information diffusion setting of online social networks, given a set of earlier adopters of some new product, Ni(t) models the time sequence of all triggered events of the followers, where each jump in the process records the timing tij of an action. In the economics and game theory setting, the counting process Ni(t) records the number of actions a customer has taken over time given a particular bundled offer. This essentially raises an interesting question of how to estimate the time-varying coverage function from the angle of counting processes. We thus propose a novel formulation which builds a connection between the two by modeling the cumulative intensity function of a counting process as a time-varying coverage function. The key idea is to parametrize the intensity function as a weighted combination of random kernel functions. We then develop an efficient learning algorithm, TCOVERAGELEARNER, to estimate the parameters of the function using a maximum likelihood approach. We show that our algorithm can provably learn the time-varying coverage function using only a polynomial number of samples.
Finally, we validate TCOVERAGELEARNER on both influence estimation and maximization problems by using cascade data from information diffusion. We show that our method performs significantly better than alternatives with little prior knowledge about the dynamics of the actual underlying diffusion processes. 2 Time-Varying Coverage Function We will first give a formal definition of the time-varying coverage function, and then explain its additional properties in detail. Definition. Let U be a (potentially uncountable) domain. We endow U with some σ-algebra A and denote a probability distribution on U by P. A coverage function is a combinatorial function over a finite set V of items, defined as f(S) := Z · P(⋃_{s∈S} Us), for all S ∈ 2^V, (1) where Us ⊂ U is the subset of domain U covered by item s ∈ V, and Z is an additional normalization constant. For time-varying coverage functions, we let the size of the subset Us grow monotonically over time, that is, Us(t) ⊆ Us(τ), for all t ⩽ τ and s ∈ V, (2) which results in a combinatorial temporal function f(S, t) = Z · P(⋃_{s∈S} Us(t)), for all S ∈ 2^V. (3) In this paper, we assume that f(S, t) is smooth and continuous, and its first order derivative with respect to time, f′(S, t), is also smooth and continuous. Representation. We now show that a time-varying coverage function, f(S, t), can be represented as an expectation over random functions based on multidimensional step basis functions. Since Us(t) is varying over time, we can associate each u ∈ U with a |V|-dimensional vector τu of change points. In particular, the s-th coordinate of τu records the time at which source node s covers u. Let τ be a random variable obtained by sampling u according to P and setting τ = τu. Note that given all τu we can compute f(S, t); we now claim that the distribution of τ is sufficient. We first introduce some notation.
Based on τu we define a |V|-dimensional step function ru(t) : R+ ↦ {0, 1}^|V|, where the s-th dimension of ru(t) is 1 if u is covered by the set Us(t) at time t, and 0 otherwise. To emphasize the dependence of the function ru(t) on τu, we will also write ru(t) as ru(t|τu). We denote the indicator vector of a set S by χ_S ∈ {0, 1}^|V|, where the s-th dimension of χ_S is 1 if s ∈ S, and 0 otherwise. Then u ∈ U is covered by ⋃_{s∈S} Us(t) at time t if χ_S^⊤ ru(t) ⩾ 1. Lemma 1. There exists a distribution Q(τ) over the vector of change points τ, such that the time-varying coverage function can be represented as f(S, t) = Z · E_{τ∼Q(τ)}[φ(χ_S^⊤ r(t|τ))] (4) where φ(x) := min{x, 1}, and r(t|τ) is a multidimensional step function parameterized by τ. Proof. Let U_S := ⋃_{s∈S} Us(t). By definition (3), we have the following integral representation f(S, t) = Z · ∫_U I{u ∈ U_S} dP(u) = Z · ∫_U φ(χ_S^⊤ ru(t)) dP(u) = Z · E_{u∼P(u)}[φ(χ_S^⊤ ru(t))]. We can define the set of u having the same τ as U_τ := {u ∈ U | τu = τ} and define a distribution over τ as dQ(τ) := ∫_{U_τ} dP(u). Then the integral representation of f(S, t) can be rewritten as Z · E_{u∼P(u)}[φ(χ_S^⊤ ru(t))] = Z · E_{τ∼Q(τ)}[φ(χ_S^⊤ r(t|τ))], which proves the lemma. 3 Model for Observations In general, we assume that the input data are provided in the form of pairs, (Si, Ni(t)), where Si is a set, and Ni(t) is a counting process in which each jump of Ni(t) records the timing of an event. We first give a brief overview of a counting process [11] and then motivate our model in detail. Counting Process. Formally, a counting process {N(t), t ⩾ 0} is any nonnegative, integer-valued stochastic process such that N(t′) ⩽ N(t) whenever t′ ⩽ t and N(0) = 0. The most common use of a counting process is to count the number of occurrences of temporal events happening along time, so the index set is usually taken to be the nonnegative real numbers R+.
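Lemma 1's representation suggests a direct Monte Carlo estimator for f(S, t): sample change-point vectors τ and average φ(χ_S^⊤ r(t|τ)). A minimal sketch under illustrative assumptions — the exponential change-point distribution, the variable names, and the toy sizes below are our own, not the paper's:

```python
import numpy as np

def coverage_value(S, t, taus, Z=1.0):
    """Monte Carlo estimate of f(S, t) = Z * E_tau[phi(chi_S^T r(t|tau))].

    taus is a (W, |V|) array: taus[w, s] is the change point at which item s
    covers the w-th sampled domain point (np.inf if never). Since
    phi(x) = min(x, 1), a sample contributes 1 exactly when ANY s in S
    has taus[w, s] <= t.
    """
    S = list(S)
    covered = (taus[:, S] <= t).any(axis=1)  # chi_S^T r(t|tau) >= 1
    return Z * covered.mean()

# Toy example: |V| = 3 items, 10000 sampled change-point vectors.
rng = np.random.default_rng(0)
taus = rng.exponential(scale=[1.0, 2.0, 4.0], size=(10000, 3))
f_single = coverage_value([0], 1.0, taus)      # ~ P(tau_0 <= 1) = 1 - e^-1
f_all = coverage_value([0, 1, 2], 1.0, taus)
```

Monotonicity in both S and t (cf. equations (2)–(3)) holds by construction: enlarging S or increasing t can only grow the covered fraction of samples.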
A counting process is a submartingale: E[N(t) | H_{t′}] ⩾ N(t′) for all t > t′, where H_{t′} denotes the history up to time t′. By the Doob–Meyer theorem [11], N(t) has the unique decomposition N(t) = Λ(t) + M(t) (5) where Λ(t) is a nondecreasing predictable process called the compensator (or cumulative intensity), and M(t) is a mean-zero martingale. Since E[dM(t) | H_{t−}] = 0, where dM(t) is the increment of M(t) over a small time interval [t, t + dt) and H_{t−} is the history until just before time t, E[dN(t) | H_{t−}] = dΛ(t) := a(t) dt (6) where a(t) is called the intensity of the counting process. Model formulation. We assume that the cumulative intensity of the counting process is modeled by a time-varying coverage function, i.e., the observation pair (Si, Ni(t)) is generated by Ni(t) = f(Si, t) + Mi(t) (7) in the time window [0, T] for some T > 0, and df(S, t) = a(S, t) dt. In other words, the time-varying coverage function controls the propensity of events occurring over time. Specifically, for a fixed set Si, as time t increases, the cumulative number of observed events grows accordingly, because f(Si, t) is a continuous monotonic function of time; for a given time t, as the set Si changes to another set Sj, the amount of coverage over the domain U may change and hence can result in a different cumulative intensity. This abstract model can be mapped to real-world applications. In the information diffusion context, for a fixed set of sources Si, as time t increases, the number of influenced nodes in the social network tends to increase; for a given time t, if we change the sources to Sj, the number of influenced nodes may be different depending on how influential the sources are.
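One concrete instance of the observation model (7) — a special case, since the paper allows general martingale noise — is an inhomogeneous Poisson process, whose compensator Λ(t) = ∫_0^t a(s) ds plays the role of f(S, t) and whose centered count N(t) − Λ(t) is the martingale M(t). A hedged simulation sketch using thinning; the decaying toy intensity is an assumption for illustration:

```python
import numpy as np

def simulate_counting_process(intensity, T, a_max, rng):
    """Sample event times on [0, T] from an inhomogeneous Poisson process
    by thinning: propose candidates at the constant rate a_max, then
    accept each candidate time t with probability intensity(t) / a_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / a_max)
        if t > T:
            break
        if rng.random() < intensity(t) / a_max:
            events.append(t)
    return events

a = lambda t: 2.0 * np.exp(-t)        # toy intensity, bounded by a_max = 2
ev = simulate_counting_process(a, T=10.0, a_max=2.0,
                               rng=np.random.default_rng(1))
# E[N(10)] equals the compensator Lambda(10) = 2 * (1 - e^-10), about 2
```

Averaging len(ev) over many independent runs recovers Λ(T), which is exactly the quantity the paper models with f(S, T).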
In the economics and game theory context, for a fixed bundle of offers Si, as time t increases, it is more likely that the merchant will observe the customers’ actions in response to the offers; even at the same time t, different bundles of offers, Si and Sj, may have very different ability to drive the customers’ actions. Compared to a regression model yi = g(Si) + ϵi with i.i.d. input data (Si, yi), our model outputs a special random function over time, that is, a counting process Ni(t) with the noise being a zero-mean martingale Mi(t). In contrast to functional regression models, our model exploits much richer structure of the problem. For instance, the random function representation in the last section can be used to parametrize the model. This special structure of the counting process allows us to estimate the parameters of our model efficiently using a maximum likelihood approach, and the martingale noise enables us to use exponential concentration inequalities in analyzing our algorithm. 4 Parametrization Based on the following two mild assumptions, we will show how to parametrize the intensity function as a weighted combination of random kernel functions, learn the parameters by maximum likelihood estimation, and eventually derive a sample complexity. (A1) a(S, t) is smooth and bounded on [0, T]: 0 < a_min ⩽ a ⩽ a_max < ∞, and ä := d²a/dt² is absolutely continuous with ∫ ä(t) dt < ∞. (A2) There is a known distribution Q′(τ) and a constant C with Q′(τ)/C ⩽ Q(τ) ⩽ CQ′(τ). Kernel Smoothing. To facilitate our finite-dimensional parameterization, we first convolve the intensity function with K(t) = k(t/σ)/σ, where σ is the bandwidth parameter and k is a kernel function (such as the Gaussian RBF kernel k(t) = e^{−t²/2}/√(2π)) with 0 ⩽ k(t) ⩽ κ_max, ∫ k(t) dt = 1, ∫ t k(t) dt = 0, and σ_k² := ∫ t² k(t) dt < ∞. (8) The convolution results in a smoothed intensity a^K(S, t) = K(t) ⋆ (df(S, t)/dt) = d(K(t) ⋆ Λ(S, t))/dt.
By the property of convolution and exchanging derivative with integral, we have that a^K(S, t) = d(Z · E_{τ∼Q(τ)}[K(t) ⋆ φ(χ_S^⊤ r(t|τ))])/dt (by definition of f(·)) = Z · E_{τ∼Q(τ)}[d(K(t) ⋆ φ(χ_S^⊤ r(t|τ)))/dt] (exchanging derivative and integral) = Z · E_{τ∼Q(τ)}[K(t) ⋆ δ(t − t(S, τ))] (by the property of convolution and the function φ(·)) = Z · E_{τ∼Q(τ)}[K(t − t(S, τ))] (by definition of δ(·)), where t(S, τ) is the time when the function φ(χ_S^⊤ r(t|τ)) jumps from 0 to 1. If we choose a small enough kernel bandwidth, a^K only incurs a small bias from a. But the smoothed intensity still involves an infinite number of parameters, due to the unknown distribution Q(τ). To address this problem, we design the following random approximation with a finite number of parameters. Random Function Approximation. The key idea is to sample a collection of W random change points τ from a known distribution Q′(τ), which can be different from Q(τ). If Q′(τ) is not very far away from Q(τ), the random approximation will be close to a^K, and thus close to a. More specifically, we denote the space of weighted combinations of W random kernel functions by A = { a^K_w(S, t) = ∑_{i=1}^W w_i K(t − t(S, τ_i)) : w ⩾ 0, Z/C ⩽ ∥w∥₁ ⩽ ZC }, {τ_i} i.i.d. ∼ Q′(τ). (9) Lemma 2. If W = Õ(Z²/(ϵσ)²), then with probability ⩾ 1 − δ, there exists an ã ∈ A such that E_S E_t[(a(S, t) − ã(S, t))²] := E_{S∼P(S)} ∫_0^T (a(S, t) − ã(S, t))² dt/T = O(ϵ² + σ⁴). The lemma then suggests setting the kernel bandwidth σ = O(√ϵ) to get O(ϵ²) approximation error. 5 Learning Algorithm We develop a learning algorithm, referred to as TCOVERAGELEARNER, to estimate the parameters of a^K_w(S, t) by maximizing the joint likelihood of all observed events based on convex optimization techniques, as follows. Maximum Likelihood Estimation. Instead of directly estimating the time-varying coverage function, which is the cumulative intensity function of the counting process, we turn to estimating the intensity function a(S, t) = ∂Λ(S, t)/∂t. Given m i.i.d. counting processes, Dm := {(S1, N1(t)), . . .
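A member of the hypothesis class (9) is cheap to evaluate once the jump times t(S, τ_i) are known. A minimal sketch with a Gaussian RBF kernel; the example weights and jump times are made up for illustration:

```python
import numpy as np

def smoothed_intensity(w, jump_times, t, h):
    """a_w^K(S, t) = sum_i w_i * K(t - t(S, tau_i)), with the Gaussian
    kernel K(x) = exp(-x^2 / (2 h^2)) / (h * sqrt(2 pi)).

    jump_times holds t(S, tau_i) for each sampled feature; np.inf means
    'never covered' and contributes exactly 0 to the sum."""
    x = t - np.asarray(jump_times, dtype=float)
    K = np.exp(-0.5 * (x / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return float(np.dot(w, K))

h = 0.5
w = np.array([0.4, 0.6])
jumps = np.array([2.0, np.inf])   # second feature never fires for this S
val = smoothed_intensity(w, jumps, 2.0, h)
# only the first kernel contributes, at its peak: 0.4 / (h * sqrt(2 pi))
```

Integrating this expression over [0, T] yields the smoothed coverage-function estimate, which is what the closed-form erfc terms in (13) compute analytically.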
, (Sm, Nm(t))}, up to observation time T, the log-likelihood of the dataset is [11] ℓ(Dm|a) = ∑_{i=1}^m ( ∫_0^T log a(Si, t) dNi(t) − ∫_0^T a(Si, t) dt ). (10) Maximizing the log-likelihood with respect to the intensity function a(S, t) then gives us the estimate â(S, t). The W-term random kernel function approximation reduces a function optimization problem to a finite-dimensional optimization problem, while incurring only a small bias in the estimated function.
Algorithm 1 TCOVERAGELEARNER
INPUT: {(Si, Ni(t))}, i = 1, . . . , m;
Sample W random features τ_1, . . . , τ_W from Q′(τ);
Compute {t(Si, τ_w)}, {g_i}, {k(t_ij)}, i ∈ {1, . . . , m}, w = 1, . . . , W, t_ij < T;
Initialize w_0 ∈ Ω = {w ⩾ 0, ∥w∥₁ ⩽ 1};
Apply the projected quasi-Newton algorithm [12] to solve (11);
OUTPUT: a^K_w(S, t) = ∑_{i=1}^W w_i K(t − t(S, τ_i))
Convex Optimization. By plugging the parametrization a^K_w(S, t) (9) into the log-likelihood (10), we formulate the optimization problem as min_w ∑_{i=1}^m w^⊤g_i − ∑_{t_ij<T} log(w^⊤k(t_ij)) subject to w ⩾ 0, ∥w∥₁ ⩽ 1, (11) where we define g_ik = ∫_0^T K(t − t(Si, τ_k)) dt and k_l(t_ij) = K(t_ij − t(Si, τ_l)), (12) and t_ij is the time when the j-th event occurs in the i-th counting process. By treating the normalization constant Z as a free variable, which will be tuned by cross validation later, we simply require that ∥w∥₁ ⩽ 1. By applying the Gaussian RBF kernel, we can derive a closed form of g_ik and the gradient ∇ℓ as g_ik = (1/2)[ erfc(−t(Si, τ_k)/(√2 h)) − erfc((T − t(Si, τ_k))/(√2 h)) ], ∇ℓ = ∑_{i=1}^m g_i − ∑_{t_ij<T} k(t_ij)/(w^⊤k(t_ij)). (13) A pleasing feature of this formulation is that it is convex in the argument w, allowing us to apply various convex optimization techniques to solve the problem efficiently. Specifically, we first draw W random features τ_1, . . . , τ_W from Q′(τ). Then, we precompute the jumping time t(Si, τ_w) for every source set {Si}_{i=1}^m on each random feature {τ_w}_{w=1}^W. Because in general |Si| ≪ n, this computation costs O(mW).
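The objective (11) and gradient (13) translate directly into code. The sketch below is a hedged stand-in: plain projected gradient descent with a crude clip-and-rescale projection replaces the projected quasi-Newton method of [12], and random matrices G and Ks stand in for the precomputed features g_i and k(t_ij); all of these are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def neg_log_lik(w, G, Ks):
    """Objective of (11): sum_i w^T g_i - sum_{t_ij < T} log(w^T k(t_ij)).
    G is (m, W); Ks stacks the event feature vectors k(t_ij), one per row."""
    return float(G.dot(w).sum() - np.log(Ks.dot(w)).sum())

def grad(w, G, Ks):
    """Gradient of (13): sum_i g_i - sum_{t_ij} k(t_ij) / (w^T k(t_ij))."""
    return G.sum(axis=0) - (Ks / Ks.dot(w)[:, None]).sum(axis=0)

def project(w):
    """Crude projection onto {w >= 0, ||w||_1 <= 1}: clip, then rescale."""
    w = np.maximum(w, 0.0)
    s = w.sum()
    return w / s if s > 1.0 else w

rng = np.random.default_rng(2)
G = rng.uniform(0.5, 1.5, size=(20, 8))    # stand-in for the g_i features
Ks = rng.uniform(0.1, 1.0, size=(60, 8))   # stand-in for the k(t_ij) rows
w0 = np.full(8, 1.0 / 8)
w = w0.copy()
for _ in range(500):
    w = project(w - 0.002 * grad(w, G, Ks))
```

A faithful implementation would substitute the exact ℓ₁-ball projection and the quasi-Newton solver of [12] for `project` and the fixed-step loop; convexity of (11) guarantees either route reaches the global optimum.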
Given the resulting m-by-W jumping-time matrix, we preprocess the feature vectors {g_i}_{i=1}^m and k(t_ij), i ∈ {1, . . . , m}, t_ij < T, which costs O(mW) and O(mLW), where L is the maximum number of events caused by a particular source set before time T. Finally, we apply the projected quasi-Newton algorithm [12] to find the weight w that minimizes the negative log-likelihood of observing the given event data. Because the evaluation of the objective function and the gradient, which costs O(mLW), is much more expensive than the projection onto the convex constraint set, and L ≪ n, the worst-case computational complexity is thus O(mnW). Algorithm 1 summarizes these steps. Sampling Strategy. One important component of our parametrization is to sample W random change points τ from a known distribution Q′(τ). Because, given a set Si, we can only observe the jumping times of the events in each counting process without knowing the identities of the covered items (which is a key difference from [8]), the best we can do is to sample from these historical data. Specifically, let Ns be the number of counting processes that a single item s ∈ V is involved in inducing, and let Js be the collection of all the jumping timestamps before time T. Then, for the s-th entry of τ, with probability |Js|/(nNs), we uniformly draw a sample from Js; and with probability 1 − |Js|/(nNs), we assign a time much greater than T to indicate that the item will never be covered. Given the very limited information, although this Q′(τ) might be quite different from Q(τ), by drawing a sufficiently large number of samples and adjusting the weights, we expect it can still lead to good results, as illustrated in our experiments later. 6 Sample Complexity Suppose we use W random features and m training examples to compute an ϵℓ-MLE solution â, i.e., ℓ(Dm|â) ⩾ max_{a′∈A} ℓ(Dm|a′) − ϵℓ. The goal is to analyze how well the function f̂ induced by â approximates the true function f.
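The sampling heuristic for Q′(τ) can be sketched as follows; the dictionary layout and the "10·T" sentinel for "never covered" are our own illustrative choices:

```python
import numpy as np

def sample_change_points(jump_times, counts, n_items, T, W, rng):
    """For each item s: with probability |J_s| / (n * N_s) draw a change
    point uniformly from its historical jump times J_s, otherwise assign
    a time far beyond T, so the item is treated as never covered.

    jump_times: dict mapping item s -> list of jump timestamps J_s
    counts:     dict mapping item s -> N_s, processes s helped induce
    """
    taus = np.full((W, n_items), 10.0 * T)        # sentinel: beyond T
    for s in range(n_items):
        Js = jump_times.get(s, [])
        Ns = max(counts.get(s, 1), 1)
        p = min(len(Js) / (n_items * Ns), 1.0)
        for row in range(W):
            if Js and rng.random() < p:
                taus[row, s] = Js[rng.integers(len(Js))]
    return taus

rng = np.random.default_rng(4)
taus = sample_change_points({0: [1.0, 2.5]}, {0: 2},
                            n_items=2, T=5.0, W=100, rng=rng)
```

The returned matrix feeds straight into the jump-time precomputation of Algorithm 1: row w gives one sampled change-point vector τ_w.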
This section describes the intuition; the complete proof is provided in the appendix. A natural choice for connecting the error between f and f̂ with the log-likelihood cost used in MLE is the Hellinger distance [22]. So it suffices to prove an upper bound on the Hellinger distance h(a, â) between â and the true intensity a, for which we need to show a high-probability bound on the (total) empirical Hellinger distance Ĥ²(a, a′) between the two. Here, h and Ĥ are defined as h²(a, a′) := (1/2) E_S E_t[√a(S, t) − √a′(S, t)]², Ĥ²(a, a′) := (1/2) ∑_{i=1}^m ∫_0^T [√a(Si, t) − √a′(Si, t)]² dt. The key to the analysis is to show that the empirical Hellinger distance can be bounded by a martingale plus some other additive error terms, which we then bound respectively. This martingale is defined based on our hypotheses and the martingales Mi associated with the counting processes Ni: M(t|g) := ∫_0^t g(t) d(∑_i Mi(t)) = ∑_{i=1}^m ∫_0^t g(t) dMi(t), where g ∈ G = { g_{a′} = (1/2) log((a + a′)/(2a)) : a′ ∈ A }. More precisely, we have the following lemma. Lemma 3. Suppose â is an ϵℓ-MLE. Then Ĥ²(â, a) ⩽ 16 M(T; g_â) + 4( ℓ(Dm|a) − max_{a′∈A} ℓ(Dm|a′) ) + 4ϵℓ. The right-hand side has three terms: the martingale (estimation error), the likelihood gap between the truth and the best hypothesis in our class (approximation error), and the optimization error. We then focus on bounding the martingale and the likelihood gap. To bound the martingale, we first introduce a notion called the (d, d′)-covering dimension, measuring the complexity of the hypothesis class and generalizing that in [25]. Based on this notion, we prove a uniform convergence inequality, combining ideas from classic works on MLE [25] and counting processes [13]. Compared to the classic uniform inequality, our result is more general, and the complexity notion has a clearer geometric interpretation and is thus easier to verify. For the likelihood gap, recall that by Lemma 2, there exists a good approximation ã ∈ A.
The likelihood gap is then bounded by that between a and ã, which is small since a and ã are close. Combining the two leads to a bound on the Hellinger distance based on the bounded dimension of the hypothesis class. We then show that the dimension of our specific hypothesis class is at most the number of random features W, and convert Ĥ²(â, a) to the desired ℓ₂ error bound on f and f̂. Theorem 4. Suppose W = Õ( Z²[ (ZT/ϵ)^{5/2} + (ZT/(ϵ a_min))^{5/4} ] ) and m = Õ( (ZT/ϵ)[W + ϵℓ] ). Then with probability ⩾ 1 − δ over the random sample of {τ_i}_{i=1}^W, we have that for any 0 ⩽ t ⩽ T, E_S[ f̂(S, t) − f(S, t) ]² ⩽ ϵ. The theorem shows that the number of random functions needed to achieve ϵ error is roughly O(ϵ^{−5/2}), and the sample size is O(ϵ^{−7/2}). They also depend on a_min, which means that with more random functions and data, we can deal with intensities with more extreme values. Finally, they increase with the time T, i.e., it is more difficult to learn the function values at later time points. 7 Experiments We evaluate TCOVERAGELEARNER on both synthetic and real-world information diffusion data. We show that our method can be more robust to model misspecification than other state-of-the-art alternatives by learning a temporal coverage function all at once.
7.1 Competitors Because our input data only include pairs of a source set and the temporal information of its triggered events {(Si, Ni(t))}_{i=1}^m, with unknown identity, we first choose the general kernel ridge regression model as the major baseline, which directly estimates the influence value of a source set χ_S by f(χ_S) = k(χ_S)(K + λI)^{−1} y, where k(χ_S) = K(χ_{Si}, χ_S) and K is the kernel matrix. We discretize the time into several steps and fit a separate model to each of them. Between two consecutive time steps, the predictions are simply interpolated. In addition, to further demonstrate the robustness of TCOVERAGELEARNER, we compare it to the two-stage methods, which must know the identity of the nodes involved in an information diffusion process to first learn a specific diffusion model, based on which they can then estimate the influence. We give them this advantage and study three well-known diffusion models: (I) the Continuous-time Independent Cascade model (CIC) [14, 15]; (II) the Discrete-time Independent Cascade model (DIC) [1]; and (III) the Linear-Threshold cascade model (LT) [1].
Figure 1: MAE of the estimated influence on test data along time with the true diffusion model being continuous-time independent cascade with pairwise Weibull (a) and Exponential (b) transmission functions, (c) discrete-time independent cascade model and (d) linear-threshold cascade model.
7.2 Influence Estimation on Synthetic Data We generate Kronecker synthetic networks ([0.9 0.5; 0.5 0.3]), which mimic real-world information diffusion patterns [16].
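The kernel ridge regression baseline admits a compact sketch: one model per discretized time step, trained on set indicator vectors χ_S. The toy data below, where influence grows with set size, is an assumption for illustration, not the paper's experimental setup:

```python
import numpy as np

def rbf_gram(X, Y, gamma):
    """K(x, y) = exp(-gamma * ||x - y||^2) on set indicator vectors."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, gamma, lam):
    """Fit f(chi_S) = k(chi_S) (K + lam*I)^{-1} y at one fixed time step."""
    alpha = np.linalg.solve(rbf_gram(X, X, gamma) + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_gram(Xq, X, gamma).dot(alpha)

# Toy data: influence at one time step roughly proportional to |S|.
rng = np.random.default_rng(3)
X = (rng.random((40, 6)) < 0.4).astype(float)   # indicator vectors chi_S
y = X.sum(1) * 2.0 + rng.normal(0, 0.1, 40)
pred = fit_krr(X, y, gamma=0.5, lam=1e-2)
err = np.abs(pred(X) - y).mean()
```

As the surrounding text notes, the full baseline repeats this fit for every time step and linearly interpolates between neighboring steps, which is exactly the stagewise error accumulation TCOVERAGELEARNER avoids.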
For CIC, we use both the Weibull distribution (Wbl) and the Exponential distribution (Exp) for the pairwise transmission function associated with each edge, and randomly set their parameters to capture the heterogeneous temporal dynamics. Then, we use NETRATE [14] to learn the model by assuming an exponential pairwise transmission function. For DIC, we choose the pairwise infection probability uniformly from 0 to 1 and fit the model by [17]. For LT, we assign the edge weight wuv between u and v as 1/dv, where dv is the degree of node v, following [1]. Finally, 1,024 source sets are sampled with power-law distributed cardinality (with exponent 2.5), each of which induces eight independent cascades (or counting processes), and the test data contain another 128 independently sampled source sets with the ground-truth influence estimated from 10,000 simulated cascades up to time T = 10. Figure 1 shows the MAE (mean absolute error) between the estimated influence value and the true value up to the observation window T = 10. The average influence is 16.02, 36.93, 9.7 and 8.3, respectively. We use 8,192 random features and two-fold cross validation on the training data to tune the normalization Z, which has the best value 1130, 1160, 1020, and 1090, respectively. We choose the RBF kernel bandwidth h = 1/√(2π) so that the magnitude of the smoothed approximate function still equals 1 (or it can be tuned by cross-validation as well), which matches the original indicator function. For the kernel ridge regression, the RBF kernel bandwidth and the regularization λ are both chosen by the same two-fold cross validation. For CIC and DIC, we learn the respective model up to time T once. Figure 1 verifies that even though the underlying diffusion models can be dramatically different, the prediction performance of TCOVERAGELEARNER is robust to the model changes and consistently outperforms the nontrivial baseline significantly.
In addition, even though CIC and DIC are provided with extra information, in Figure 1(a), because the ground truth is a continuous-time diffusion model with Weibull transmission functions, they do not have good performance: CIC assumes the right model but the wrong family of transmission functions. In Figure 1(b), we expect CIC to have the best performance because it assumes the correct diffusion model and transmission functions. Yet, TCOVERAGELEARNER still has comparable performance with even less information. In Figure 1(c), although DIC has assumed the correct model, it is hard to determine the correct step size to discretize the time line, and since we only learn the model once up to time T (instead of at each time point), it is harder to fit the whole process. In Figure 1(d), both CIC and DIC have the wrong model, so we see a trend similar to Figure 1(a). Moreover, for kernel ridge regression, we have to first partition the timeline with an arbitrary step size, fit the model to each time step, and interpolate the value between neighboring time steps. Not only will the errors from each stage accumulate in the error of the final prediction, but we also cannot rely on this method to predict the influence of a source set beyond the observation window T.
Figure 2: (a) Average MAE from time 1 to 10 on seven groups of real cascade data; (b) Improved estimation with increasing number of random features; (c) Runtime in log-log scale; (d) Maximized influence of selected sources on the held-out testing data along time.
Overall, compared to the kernel ridge regression, TCOVERAGELEARNER only needs to be trained once, given all the event data up to time T, in a compact and principled way, and can then be used to infer the influence of any given source set at any particular time much more efficiently and accurately. In contrast to the two-stage methods, TCOVERAGELEARNER is able to address the more general setting with far fewer assumptions and less information, but still produces consistently competitive performance. 7.3 Influence Estimation on Real Data MemeTracker is a real-world dataset [18] for studying information diffusion. The temporal flow of information was traced using quotes, which are short textual phrases spreading through websites. We have selected seven typical groups of cascades with representative keywords like ‘apple and jobs’, ‘tsunami earthquake’, etc., among the top 1,000 active sites. Each set of cascades is split into 60% train and 40% test. Because we can often observe cascades only from a single seed node, we rarely have cascades produced from multiple sources simultaneously. However, because our model can capture the correlation among multiple sources, we challenge TCOVERAGELEARNER with sets of randomly chosen multiple source nodes on the independent hold-out data. Although the generation of sets of multiple source nodes is simulated, the respective influence is calculated from the real test data as follows: Given a source set S, for each node u ∈ S, let C(u) denote the set of cascades generated from u on the testing data. We uniformly sample cascades from C(u). The average length of all sampled cascades is treated as the true influence of S. We draw 128 source sets and report the average MAE along time in Figure 2(a). Again, we can observe that TCOVERAGELEARNER has consistent and robust estimation performance across all testing groups.
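The influence maximization task reported in Figure 2(d) relies on the classic greedy algorithm of Nemhauser et al. [19], which for a monotone submodular objective — coverage functions included — guarantees a (1 − 1/e) approximation. A self-contained sketch on a toy coverage function; the toy ground sets are our own illustration:

```python
def greedy_max_coverage(f, V, k):
    """Greedily add the item with the largest marginal gain
    f(S + [v]) - f(S) until |S| = k."""
    S = []
    for _ in range(k):
        gains = {v: f(S + [v]) - f(S) for v in V if v not in S}
        best = max(gains, key=gains.get)
        S.append(best)
    return S

# Toy coverage function: f(S) = number of ground elements covered by S.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4, 5, 6}, 3: {1}}
f = lambda S: float(len(set().union(*(sets[v] for v in S)))) if S else 0.0
chosen = greedy_max_coverage(f, list(sets), 2)
# item 2 is picked first (marginal gain 3), then a gain-2 item; f(chosen) = 5
```

In the paper's pipeline, f would be the learned f̂(S, T), so the quality of the selected seed set depends only on the relative ranking of marginal gains, which is why Figure 2(d) shows a small gap between methods with different absolute estimation errors.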
Figure 2(b) verifies that the prediction can be improved as more random features are exploited, because the representational power of TCOVERAGELEARNER increases to better approximate the unknown true coverage function. Figure 2(c) indicates that the runtime of TCOVERAGELEARNER is able to scale linearly with a large number of random features. Finally, Figure 2(d) shows the application of the learned coverage function to the influence maximization problem along time, which seeks to find a set of source nodes that maximizes the expected number of infected nodes by time T. The classic greedy algorithm [19] is applied to solve the problem, and the influence is calculated and averaged over the seven held-out test data sets. It shows that TCOVERAGELEARNER is very competitive with the two-stage methods despite much weaker assumptions. Because the greedy algorithm mainly depends on the relative rank of the selected sources, although the estimated influence values can be different, the selected sets of sources can be similar, so the performance gap is not large. 8 Conclusions We propose a new problem of learning temporal coverage functions with a novel parametrization connected with counting processes, and develop an efficient algorithm which is guaranteed to learn such a combinatorial function from only a polynomial number of training samples. Empirical study also verifies that our method outperforms existing methods consistently and significantly. Acknowledgments This work was supported in part by NSF grants CCF-0953192, CCF-1451177, CCF-1101283, and CCF-1422910, ONR grant N00014-09-1-0751, AFOSR grant FA9550-09-10538, Raytheon Faculty Fellowship, NSF IIS1116886, NSF/NIH BIGDATA 1R01GM108341, NSF CAREER IIS1350983 and Facebook Graduate Fellowship 2014-2015. References
[1] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In SIGKDD 2003, pages 137–146. ACM, 2003.
[2] C. Guestrin, A. Krause, and A. Singh.
Near-optimal sensor placements in Gaussian processes. In International Conference on Machine Learning (ICML), 2005.
[3] Benny Lehmann, Daniel Lehmann, and Noam Nisan. Combinatorial auctions with decreasing marginal utilities. In EC ’01, pages 18–28, 2001.
[4] Maria-Florina Balcan and Nicholas J. A. Harvey. Learning submodular functions. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, pages 793–802. ACM, 2011.
[5] A. Badanidiyuru, S. Dobzinski, H. Fu, R. D. Kleinberg, N. Nisan, and T. Roughgarden. Sketching valuation functions. In Annual ACM-SIAM Symposium on Discrete Algorithms, 2012.
[6] Vitaly Feldman and Pravesh Kothari. Learning coverage functions. arXiv preprint arXiv:1304.2079, 2013.
[7] Vitaly Feldman and Jan Vondrak. Optimal bounds on approximation of submodular and XOS functions by juntas. In FOCS, 2013.
[8] Nan Du, Yingyu Liang, Nina Balcan, and Le Song. Influence function learning in information diffusion networks. In ICML 2014, 2014.
[9] L. Song, M. Kolar, and E. P. Xing. Time-varying dynamic Bayesian networks. In Neural Information Processing Systems, pages 1732–1740, 2009.
[10] M. Kolar, L. Song, A. Ahmed, and E. P. Xing. Estimating time-varying networks. Ann. Appl. Statist., 4(1):94–123, 2010.
[11] Odd Aalen, Oernulf Borgan, and Håkon K. Gjessing. Survival and event history analysis: A process point of view. Springer, 2008.
[12] M. Schmidt, E. van den Berg, M. P. Friedlander, and K. Murphy. Optimizing costly functions with simple constraints: A limited-memory projected quasi-Newton algorithm. In AISTATS, 2009.
[13] Sara van de Geer. Exponential inequalities for martingales, with application to maximum likelihood estimation for counting processes. The Annals of Statistics, pages 1779–1801, 1995.
[14] Manuel Gomez Rodriguez, David Balduzzi, and Bernhard Schölkopf. Uncovering the temporal dynamics of diffusion networks. arXiv preprint arXiv:1105.0697, 2011.
[15] Nan Du, Le Song, Hongyuan Zha, and Manuel Gomez Rodriguez.
Scalable influence estimation in continuous time diffusion networks. In NIPS 2013, 2013.
[16] Jure Leskovec, Deepayan Chakrabarti, Jon Kleinberg, Christos Faloutsos, and Zoubin Ghahramani. Kronecker graphs: An approach to modeling networks. Journal of Machine Learning Research, 11(Feb):985–1042, 2010.
[17] Praneeth Netrapalli and Sujay Sanghavi. Learning the graph of epidemic cascades. In SIGMETRICS/PERFORMANCE, pages 211–222. ACM, 2012.
[18] Jure Leskovec, Lars Backstrom, and Jon Kleinberg. Meme-tracking and the dynamics of the news cycle. In SIGKDD 2009, pages 497–506. ACM, 2009.
[19] G. Nemhauser, L. Wolsey, and M. Fisher. An analysis of the approximations for maximizing submodular set functions. Mathematical Programming, 14:265–294, 1978.
[20] L. Wasserman. All of Nonparametric Statistics. Springer, 2006.
[21] Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Neural Information Processing Systems, 2009.
[22] Sara van de Geer. Hellinger-consistency of certain nonparametric maximum likelihood estimators. The Annals of Statistics, pages 14–44, 1993.
[23] G. R. Shorack and J. A. Wellner. Empirical Processes with Applications to Statistics. Wiley, New York, 1986.
[24] Wing Hung Wong and Xiaotong Shen. Probability inequalities for likelihood ratios and convergence rates of sieve MLEs. The Annals of Statistics, pages 339–362, 1995.
[25] L. Birgé and P. Massart. Minimum contrast estimators on sieves: Exponential bounds and rates of convergence. Bernoulli, 4(3), 1998.
[26] Kenneth S. Alexander. Rates of growth and sample moduli for weighted empirical processes indexed by sets. Probability Theory and Related Fields, 75(3):379–423, 1987.
|
2014
|
145
|
5,231
|
Learning Shuffle Ideals Under Restricted Distributions Dongqu Chen Department of Computer Science Yale University dongqu.chen@yale.edu Abstract The class of shuffle ideals is a fundamental sub-family of regular languages. The shuffle ideal generated by a string set U is the collection of all strings containing some string u ∈ U as a (not necessarily contiguous) subsequence. In spite of its apparent simplicity, the problem of learning a shuffle ideal from given data is known to be computationally intractable. In this paper, we study the PAC learnability of shuffle ideals and present positive results on this learning problem under element-wise independent and identical distributions and Markovian distributions in the statistical query model. A constrained generalization to learning shuffle ideals under product distributions is also provided. In the empirical direction, we propose a heuristic algorithm for learning shuffle ideals from given labeled strings under general unrestricted distributions. Experiments demonstrate the advantage in both efficiency and accuracy of our algorithm. 1 Introduction The learnability of regular languages is a classic topic in computational learning theory. The applications of this learning problem include natural language processing (speech recognition, morphological analysis), computational linguistics, robotics and control systems, computational biology (phylogeny, structural pattern recognition), data mining, time series and music ([7, 14–18, 20, 21]). Exploring the learnability of the family of formal languages is significant to both the theoretical and applied realms. Valiant’s PAC learning model introduces a clean and elegant framework for mathematical analysis of machine learning and is one of the most widely studied theoretical learning models ([22]).
In the PAC learning model, unfortunately, the class of regular languages, or equivalently the concept class of deterministic finite automata (DFA), is known to be inherently unpredictable ([1, 9, 19]). In a modified version of Valiant’s model which allows the learner to make membership queries, Angluin [2] has shown that the concept class of regular languages is PAC learnable. Throughout this paper we study the PAC learnability of a subclass of regular languages, the class of (extended) shuffle ideals. The shuffle ideal generated by an augmented string U is the collection of all strings containing some u ∈ U as a (not necessarily contiguous) subsequence, where an augmented string is a finite concatenation of symbol sets (see Figure 1 for an illustration; Figure 1 shows the DFA accepting precisely the shuffle ideal of U = (a|b|d)a(b|c) over Σ = {a, b, c, d}). The special class of shuffle ideals generated by a single string is called the principal shuffle ideals. Unfortunately, even such a simple class is not PAC learnable unless RP=NP ([3]). However, in most application scenarios, the strings are drawn from some particular distribution we are interested in. Angluin et al. [3] prove that under the uniform string distribution, principal shuffle ideals are PAC learnable. Nevertheless, the requirement of complete knowledge of the distribution, the dependence on the symmetry of the uniform distribution and the restriction to principal shuffle ideals lead to the lack of generality of that algorithm. Our main contribution in this paper is to present positive results on learning the class of shuffle ideals under element-wise independent and identical distributions and Markovian distributions. Extensions of our main results include a constrained generalization to learning shuffle ideals under product distributions and a heuristic method for learning principal shuffle ideals under general unrestricted distributions.
After introducing the preliminaries in Section 2, we present our main result in Section 3: the extended class of shuffle ideals is PAC learnable from element-wise i.i.d. strings. That is, the distributions of the symbols in a string are identical and independent of each other. A constrained generalization to learning shuffle ideals under product distributions is also provided. In Section 4, we further show the PAC learnability of principal shuffle ideals when the example strings drawn from Σ^{≤n} are generated by a Markov chain with some lower bound assumptions on the transition matrix. In Section 5, we propose a greedy algorithm for learning principal shuffle ideals under general unrestricted distributions. Experiments demonstrate the advantage in both efficiency and accuracy of our heuristic algorithm. 2 Preliminaries We consider strings over a fixed finite alphabet Σ. The empty string is λ. Let Σ* be the Kleene star of Σ and Σ∪ be the collection of all subsets of Σ. As strings are concatenations of symbols, we similarly define augmented strings as concatenations of unions of symbols. Definition 1 (Alphabet, simple string and augmented string) Let Σ be a non-empty finite set of symbols, called the alphabet. A simple string over Σ is any finite sequence of symbols from Σ, and Σ* is the collection of all simple strings. An augmented string over Σ is any finite concatenation of symbol sets from Σ∪, and (Σ∪)* is the collection of all augmented strings. Denote by s the cardinality of Σ. Because an augmented string only contains strings of the same length, the length of an augmented string U, denoted by |U|, is the length of any u ∈ U. We use exponential notation for repeated concatenation of a string with itself; that is, v^k is the concatenation of k copies of string v. Starting from index 1, we denote by v_i the i-th symbol in string v and use the notation v[i, j] = v_i . . . v_j for 1 ≤ i ≤ j ≤ |v|. Define the binary relation ⊑ on ⟨(Σ∪)*, Σ*⟩ as follows.
For a simple string w, w ⊑ v holds if and only if there is a witness ⃗i = (i_1 < i_2 < . . . < i_{|w|}) such that v_{i_j} = w_j for all integers 1 ≤ j ≤ |w|. For an augmented string W, W ⊑ v if and only if there exists some w ∈ W such that w ⊑ v. When there are several witnesses for W ⊑ v, we may order them coordinate-wise, referring to the unique minimal element as the leftmost embedding. We will write I_{W⊑v} to denote the position of the last symbol of W in its leftmost embedding in v (if the latter exists; otherwise, I_{W⊑v} = ∞). Definition 2 (Extended/Principal Shuffle Ideal) The (extended) shuffle ideal of an augmented string U ∈ (Σ∪)^L is a regular language defined as X(U) = {v ∈ Σ* | ∃u ∈ U, u ⊑ v} = Σ* U_1 Σ* U_2 Σ* . . . Σ* U_L Σ*. A shuffle ideal is principal if it is generated by a simple string. A shuffle ideal is an ideal in order theory and was originally defined for lattices. Denote by Ш the class of principal shuffle ideals and by X the class of extended shuffle ideals. Unless otherwise stated, in this paper shuffle ideal refers to the extended ideal. An example is given in Figure 1. The feasibility of determining whether a string is in the class X(U) is obvious. Lemma 1 Evaluating the relation U ⊑ x and meanwhile determining I_{U⊑x} is feasible in time O(|x|). In a computational learning model, an algorithm is usually given access to an oracle providing information about the sample. In Valiant’s work [22], the example oracle EX(c, D) was defined, where c is the target concept and D is a distribution over the instance space. On each call, EX(c, D) draws an input x independently at random from the instance space I under the distribution D, and returns the labeled example ⟨x, c(x)⟩. Definition 3 (PAC Learnability: [22]) Let C be a concept class over the instance space I.
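Lemma 1's linear-time check is the standard greedy leftmost-embedding scan. A minimal Python sketch (our own illustration; the function name and the set-of-symbols encoding of augmented strings are ours, not the paper's):

```python
def leftmost_embedding(U, x):
    """Greedy scan deciding U ⊑ x for an augmented string U.

    U is a list of symbol sets; x is any sequence of symbols. Returns
    the 1-based position I_{U⊑x} of the last symbol of the leftmost
    embedding, or None if U is not a subsequence of x. One pass: O(|x|).
    """
    j = 0  # index of the next element of U still to match
    for i, sym in enumerate(x, start=1):
        if j < len(U) and sym in U[j]:
            j += 1
            if j == len(U):
                return i  # all of U matched; i is the embedding end
    return None

# U = (a|b|d) a (b|c), the augmented string from Figure 1
U_fig1 = [{'a', 'b', 'd'}, {'a'}, {'b', 'c'}]
```

For example, `leftmost_embedding(U_fig1, "dacb")` matches d, a, c at positions 1, 2, 3 and returns 3, while "cba" has no b or c after its a, so the result is None.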
We say C is probably approximately correctly (PAC) learnable if there exists an algorithm A with the following property: for every concept c ∈ C, for every distribution D on I, and for all 0 < ϵ < 1/2 and 0 < δ < 1/2, if A is given access to EX(c, D) on I and inputs ϵ and δ, then with probability at least 1 − δ, A outputs a hypothesis h ∈ H satisfying Pr_{x∼D}[c(x) ≠ h(x)] ≤ ϵ. If A runs in time polynomial in 1/ϵ, 1/δ and the representation size of c, we say that C is efficiently PAC learnable. We refer to ϵ as the error parameter and δ as the confidence parameter. If the error parameter is set to 0, the learning is exact ([6]). Kearns [11] extended Valiant’s model and introduced the statistical query oracle STAT(c, D). Kearns’ oracle takes as input a statistical query of the form (χ, τ). Here χ is any mapping of a labeled example to {0, 1} and τ ∈ [0, 1] is called the noise tolerance. STAT(c, D) returns an estimate for the expectation E[χ], that is, the probability that χ = 1 when the labeled example is drawn according to D. A statistical query can have a condition, so E[χ] can be a conditional probability. This estimate is accurate within additive error τ. Definition 4 (Legitimacy and Feasibility: [11]) A statistical query χ is legitimate and feasible if and only if, with respect to 1/ϵ, 1/τ and the representation size of c: 1. query χ maps a labeled example ⟨x, c(x)⟩ to {0, 1}; 2. query χ can be efficiently evaluated in polynomial time; 3. the condition of χ, if any, can be efficiently evaluated in polynomial time; 4. the probability of the condition of χ, if any, is at least polynomially large. Throughout this paper, the learnability of shuffle ideals is studied in the statistical query model. Kearns [11] proves that the oracle STAT(c, D) is weaker than the oracle EX(c, D). In words, if a concept class is PAC learnable from STAT(c, D), then it is PAC learnable from EX(c, D), but not necessarily vice versa. 3 Learning shuffle ideals from element-wise i.i.d.
strings Although learning the class of shuffle ideals has been proved hard, in most scenarios the string distribution is restricted or even known. A very usual situation in practice is that we have some prior knowledge of the unknown distribution. One common example is the string distributions where each symbol in a string is generated independently and identically from an unknown distribution. It is element-wise i.i.d. because we view a string as a vector of symbols. This case is general enough to cover some popular distributions in applications such as the uniform distribution and the multinomial distribution. In this section, we present as our main result a statistical query algorithm for learning the concept class of extended shuffle ideals from element-wise i.i.d. strings and provide theoretical guarantees of its computational efficiency and accuracy in the statistical query model. The instance space is Σ^n. Denote by U the augmented pattern string that generates the target shuffle ideal and by L = |U| the length of U. 3.1 Statistical query algorithm Before presenting the algorithm, we define the function θ_{V,a}(·) and the query χ_{V,a}(·, ·) for any augmented string V ∈ (Σ∪)^{≤n} and any symbol a ∈ Σ as follows: θ_{V,a}(x) = a if V ⋢ x[1, n−1], and θ_{V,a}(x) = x_{I_{V⊑x}+1} if V ⊑ x[1, n−1]; χ_{V,a}(x, y) = (y + 1)/2, given θ_{V,a}(x) = a, where y = c(x) is the label of the example string x. More precisely, y = +1 if x ∈ X(U) and y = −1 otherwise. Our learning algorithm uses statistical queries to recover the string U ∈ (Σ∪)^L one element at a time. It starts with the empty string V = λ. Having recovered V = U[1, ℓ] where 0 ≤ ℓ < L, we infer U_{ℓ+1} as follows. For each a ∈ Σ, the statistical query oracle is called with the query χ_{V,a} at the error tolerance τ claimed in Theorem 1. Our key technical observation is that the value of E[χ_{V,a}] effectively selects U_{ℓ+1}.
The query results of χ_{V,a} will form two separate clusters such that the maximum difference (variance) inside one cluster is smaller than the minimum difference (gap) between the two clusters, making them distinguishable. The set of symbols in the cluster with the larger query results is proved to be U_{ℓ+1}. Notice that this statistical query only works for 0 ≤ ℓ < L. To complete the algorithm, the algorithm addresses the trivial case ℓ = L with the query Pr[y = +1 | V ⊑ x] and halts if the query answer is close to 1. 3.2 PAC learnability of ideal X We show the algorithm described above learns the class of shuffle ideals from element-wise i.i.d. strings in the statistical query learning model. Theorem 1 Under element-wise independent and identical distributions over the instance space I = Σ^n, the concept class X is approximately identifiable with O(sn) conditional statistical queries from STAT(X, D) at tolerance τ = ϵ² / (40sn² + 4ϵ), or with O(sn) statistical queries from STAT(X, D) at tolerance τ̄ = (1 − ϵ/(20sn² + 2ϵ)) · ϵ⁴ / (16sn(10sn² + ϵ)). We provide the main idea of the proofs in this section and defer the details and algebra to Appendix A. The proof starts from the legitimacy and feasibility of the algorithm. Since χ_{V,a} computes a binary mapping from labeled examples to {0, 1}, the legitimacy is trivial. But χ_{V,a} is not feasible for symbols in Σ of small occurrence probabilities. We avoid the problematic cases by reducing the original learning problem to the same problem with a polynomial lower bound assumption Pr[x_i = a] ≥ ϵ/(2sn) − ϵ²/(20sn² + 2ϵ) for any a ∈ Σ, and achieve feasibility. The correctness of the algorithm is based on the intuition that the query result E[χ_{V,a₊}] of a symbol a₊ ∈ U_{ℓ+1} should be greater than that of a symbol a₋ ∉ U_{ℓ+1}, and that the difference is large enough to tolerate the noise from the oracle. To prove this, we first consider the exact learning case.
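The recovery loop of Section 3.1 can be illustrated end to end on a toy instance. The sketch below (our own illustration; all names are ours) replaces the STAT oracle with exact brute-force enumeration under the uniform element-wise i.i.d. distribution, which is feasible only for tiny n, and splits the query results at the largest gap to separate the two clusters:

```python
from itertools import product

def embed_end(V, x):
    """1-based end position of the leftmost embedding of the augmented
    string V (a list of symbol sets) in x; 0 if V is empty; None if
    V is not a subsequence of x."""
    if not V:
        return 0
    j = 0
    for i, s in enumerate(x, 1):
        if s in V[j]:
            j += 1
            if j == len(V):
                return i
    return None

def chi_mean(V, a, U, sigma, n):
    """Exact E[chi_{V,a}] = Pr[y = +1 | theta_{V,a}(x) = a] under the
    uniform element-wise i.i.d. distribution on sigma^n, computed by
    enumeration (a toy stand-in for the STAT oracle)."""
    num = den = 0
    for x in product(sigma, repeat=n):
        p = embed_end(V, x[:-1])            # embed V in x[1, n-1]
        sym = a if p is None else x[p]      # theta_{V,a}(x)
        if sym == a:
            den += 1
            num += embed_end(U, x) is not None  # label y = +1 iff U ⊑ x
    return num / den

def next_element(V, U, sigma, n):
    """Recover U_{l+1}: query chi_{V,a} for every a in sigma and return
    the symbols in the cluster with the larger results (split at the
    largest gap between consecutive sorted query values)."""
    vals = sorted((chi_mean(V, a, U, sigma, n), a) for a in sigma)
    gaps = [vals[i + 1][0] - vals[i][0] for i in range(len(vals) - 1)]
    cut = gaps.index(max(gaps)) + 1
    return {a for _, a in vals[cut:]}
```

With target U = a·(b|c) over Σ = {a, b, c} and n = 5, the first round returns {a} and, after fixing V = a, the second round returns {b, c}, recovering U element by element as in the paper's description.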
Define an infinite string U′ = U[1, ℓ] U[ℓ+2, L] (U_{ℓ+1})^∞ and let x′ = xΣ^∞ be the extension of x obtained by padding it on the right with an infinite string generated from the same distribution as x. Let Q(j, i) be the probability that the largest g such that U′[1, g] ⊑ x′[1, i] is j, or formally Q(j, i) = Pr[U′[1, j] ⊑ x′[1, i] ∧ U′[1, j + 1] ⋢ x′[1, i]]. By taking the difference between E[χ_{V,a₊}] and E[χ_{V,a₋}] in terms of Q(j, i), we get the query tolerance for exact learning. Lemma 2 Under element-wise independent and identical distributions over the instance space I = Σ^n, the concept class X is exactly identifiable with O(sn) conditional statistical queries from STAT(X, D) at tolerance τ′ = (1/5) Q(L − 1, n − 1). Lemma 2 indicates that bounding the quantity Q(L − 1, n − 1) is the key to the tolerance for PAC learning. Unfortunately, the distribution {Q(j, i)} does not appear to have any strong property, to our knowledge, that provides a polynomial lower bound. Instead we introduce the new quantity R(j, i) = Pr[U′[1, j] ⊑ x′[1, i] ∧ U′[1, j] ⋢ x′[1, i − 1]], the probability that the smallest g such that U′[1, j] ⊑ x′[1, g] is i. An important property of the distribution {R(j, i)} is its strong unimodality, as defined below. Definition 5 (Unimodality: [8]) A distribution {P(i)} with all support on the lattice of integers is unimodal if and only if there exists at least one integer K such that P(i) ≥ P(i − 1) for all i ≤ K and P(i + 1) ≤ P(i) for all i ≥ K. We say K is a mode of the distribution {P(i)}. Throughout this paper, when referring to the mode of a distribution, we mean the one with the largest index, if the distribution has multiple modes with equal probabilities. Definition 6 (Strong Unimodality: [10]) A distribution {H(i)} is strongly unimodal if and only if the convolution of {H(i)} with any unimodal distribution {P(i)} is unimodal. Since a distribution with all mass at zero is unimodal, a strongly unimodal distribution is also unimodal.
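Definitions 5 and 6 are easy to check numerically on finitely supported distributions. The sketch below (our illustration, not the paper's proof machinery) convolves a binomial distribution, which is log-concave and hence strongly unimodal, with a unimodal distribution and verifies that the result is unimodal:

```python
def convolve(H, P):
    """(H*P)(i) = sum_j H(j) P(i-j), for finitely supported
    distributions given as lists indexed from 0."""
    out = [0.0] * (len(H) + len(P) - 1)
    for j, h in enumerate(H):
        for k, p in enumerate(P):
            out[j + k] += h * p
    return out

def is_unimodal(P, tol=1e-12):
    """True iff P weakly rises up to some mode K and then weakly falls."""
    k = max(range(len(P)), key=P.__getitem__)
    return (all(P[i] <= P[i + 1] + tol for i in range(k)) and
            all(P[i + 1] <= P[i] + tol for i in range(k, len(P) - 1)))

# Binomial(4, 1/2): log-concave, hence strongly unimodal
binom = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]
unimodal = [0.2, 0.5, 0.3]
bimodal = [0.4, 0.1, 0.5]   # fails Definition 5
```

Here `is_unimodal(bimodal)` is False, while `convolve(binom, unimodal)` remains unimodal, as Ibragimov's characterization [10] of strong unimodality via log-concavity predicts.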
In this paper, we only consider distributions with all support on the lattice of integers, so the convolution of {H(i)} and {P(i)} is (H ∗ P)(i) = Σ_{j=−∞}^{∞} H(j) P(i − j) = Σ_{j=−∞}^{∞} H(i − j) P(j). We prove the strong unimodality of {R(j, i)} with respect to i by showing, by induction, that it is the convolution of two log-concave distributions. We do an initial statistical query to estimate Pr[y = +1] to handle the two marginal cases Pr[y = +1] ≤ ϵ/2 and Pr[y = +1] ≥ 1 − ϵ/2. After that an additional query Pr[y = +1 | V ⊑ x] is made to tell whether ℓ = L. If the algorithm doesn’t halt, it means ℓ < L and both Pr[y = +1] and Pr[y = −1] are at least ϵ/2 − 2τ. By upper bounding Pr[y = +1] and Pr[y = −1] using linear sums of R(j, i), the strong unimodality of {R(j, i)} gives a lower bound for R(L, n), which further implies one for Q(L − 1, n − 1) and completes the proof. 3.3 A generalization to instance space Σ^{≤n} We have proved the extended class of shuffle ideals is PAC learnable from element-wise i.i.d. fixed-length strings. Nevertheless, in many real-world applications such as natural language processing and computational linguistics, it is more natural to have strings of varying lengths. Let n be the maximum length of the sample strings; as a consequence, the instance space for learning is Σ^{≤n}. Here we show how to generalize the statistical query algorithm in Section 3.1 to the more general instance space Σ^{≤n}. Let A_i be the algorithm in Section 3.1 for learning shuffle ideals from element-wise i.i.d. strings of fixed length i. Because the instance space Σ^{≤n} = ∪_{i≤n} Σ^i, we divide the sample S into n subsets {S_i} where S_i = {x | |x| = i}. An initial statistical query then is made to estimate the probability Pr[|x| = i] for each i ≤ n at tolerance ϵ/(8n). We discard all subsets S_i with query answer ≤ 3ϵ/(8n) in the learning procedure, because we know Pr[|x| = i] ≤ ϵ/(2n); there are at most (n − 1) such S_i of low occurrence probabilities.
The total probability that an instance comes from one of these negligible sets is at most ϵ/2. Otherwise, Pr[|x| = i] ≥ ϵ/(4n) and we apply algorithm A_i, with error parameter ϵ/2, on each S_i with query answer ≥ 3ϵ/(8n). Because the probability of the condition is polynomially large, the algorithm is feasible. Finally, the total error over the whole instance space will be bounded by ϵ, and the concept class X is PAC learnable from element-wise i.i.d. strings over the instance space Σ^{≤n}. Corollary 1 Under element-wise independent and identical distributions over the instance space I = Σ^{≤n}, the concept class X is approximately identifiable with O(sn²) conditional statistical queries from STAT(X, D) at tolerance τ = ϵ² / (160sn² + 8ϵ), or with O(sn²) statistical queries from STAT(X, D) at tolerance τ̄ = (1 − ϵ/(40sn² + 2ϵ)) · ϵ⁵ / (512sn²(20sn² + ϵ)). 3.4 A constrained generalization to product distributions A direct generalization from element-wise independent and identical distributions is product distributions. A random string, or a random vector of symbols, under a product distribution has element-wise independence between its elements. That is, Pr[X = x] = ∏_{i=1}^{|x|} Pr[X_i = x_i]. Although strings under product distributions share many independence properties with element-wise i.i.d. strings, the algorithm in Section 3.1 is not directly applicable to this case, as the distribution {R(j, i)} defined above is not unimodal with respect to i in general. However, the intuition that, given I_{V⊑x} = h, the strings with x_{h+1} ∈ U_{ℓ+1} have a higher probability of positivity than the strings with x_{h+1} ∉ U_{ℓ+1} is still true under product distributions. Thus we generalize the query χ_{V,a} and define, for any V ∈ (Σ∪)^{≤n}, a ∈ Σ and h ∈ [0, n − 1], χ̃_{V,a,h}(x, y) = (y + 1)/2, given I_{V⊑x} = h and x_{h+1} = a, where y = c(x) is the label of the example string x. To ensure the legitimacy and feasibility of the algorithm, we have to attach a lower bound assumption that Pr[x_i = a] ≥ t > 0 for all 1 ≤ i ≤ n and all a ∈ Σ.
Appendix C provides a constrained algorithm based on this intuition. Let P(+|a, h) denote E[χ̃_{V,a,h}]. If the difference P(+|a₊, h) − P(+|a₋, h) is large enough for some h with non-negligible Pr[I_{V⊑x} = h], then we are able to learn the next element in U. Otherwise, the difference is very small and we will show that there is an interval starting from index (h + 1) which we can skip with little risk. The algorithm is able to classify any string whose classification process skips O(1) intervals. Details of this constrained generalization are deferred to Appendix C. 4 Learning principal shuffle ideals from Markovian strings Markovian strings are widely studied in natural language processing and biological sequence modeling. Formally, a random string x is Markovian if the distribution of x_{i+1} only depends on the value of x_i: Pr[x_{i+1} | x_1 . . . x_i] = Pr[x_{i+1} | x_i] for any i ≥ 1. If we denote by π_0 the distribution of x_1 and define the s × s stochastic matrix M by M(a_1, a_2) = Pr[x_{i+1} = a_1 | x_i = a_2], then a random string can be viewed as a Markov chain with initial distribution π_0 and transition matrix M. We choose Σ^{≤n} as the instance space in this section and assume independence between the string length and the symbols in the string. We assume Pr[|x| = k] ≥ t for all 1 ≤ k ≤ n and min{M(·, ·), π_0(·)} ≥ c for some positive t and c. We will prove the PAC learnability of the class Ш under this lower bound assumption. Denote by u the target pattern string and let L = |u|. 4.1 Statistical query algorithm Starting with the empty string v = λ, the pattern string u is recovered one symbol at a time. Having recovered v = u[1, ℓ] for 0 ≤ ℓ < L, we infer u_{ℓ+1} by Ψ_{v,a} = Σ_{k=h+1}^{n} E[χ_{v,a,k}(x, y)], where χ_{v,a,k}(x, y) = (y + 1)/2, given I_{v⊑x} = h, x_{h+1} = a and |x| = k, and where h is chosen from [0, n − 1] such that the probability Pr[I_{v⊑x} = h] is polynomially large. The statistical queries χ_{v,a,k} are made at the tolerance τ claimed in Theorem 2, and the symbol with the largest query result Ψ_{v,a} is proved to be u_{ℓ+1}.
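The Markovian string model of this section (x_1 distributed according to π_0, and x_{i+1} drawn from M conditioned on x_i) can be sampled directly. A small Python sketch (our illustration; the dict-based encoding and names are ours):

```python
import random

def sample_markov_string(pi0, M, length, rng=random):
    """Sample a Markovian string: x_1 ~ pi0 and x_{i+1} ~ M[x_i].

    pi0: dict symbol -> probability (initial distribution).
    M:   dict symbol -> dict symbol -> probability (column of the
         transition matrix, i.e. next-symbol distribution).
    The string length is drawn independently of the symbols in the
    paper's model; here it is passed in explicitly.
    """
    def draw(dist):
        r, acc = rng.random(), 0.0
        for sym, p in dist.items():
            acc += p
            if r < acc:
                return sym
        return sym  # guard against floating-point rounding

    x = [draw(pi0)]
    for _ in range(length - 1):
        x.append(draw(M[x[-1]]))
    return "".join(x)
```

With a deterministic chain, e.g. π_0 putting all mass on a and M alternating a and b with probability 1, the sample of length 5 is always "ababa", which makes the transition mechanics easy to check.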
Again, the case where ℓ = L is addressed by the query Pr[y = +1 | v ⊑ x]. The learning procedure is completed if the query result is close to 1. 4.2 PAC learnability of principal ideal Ш With the query Ψ_{v,a}, we are able to recover the pattern string u approximately from STAT(Ш(u), D) at a proper tolerance, as stated in Theorem 2: Theorem 2 Under Markovian string distributions over the instance space I = Σ^{≤n}, given Pr[|x| = k] ≥ t > 0 for all 1 ≤ k ≤ n and min{M(·, ·), π_0(·)} ≥ c > 0, the concept class Ш is approximately identifiable with O(sn²) conditional statistical queries from STAT(Ш, D) at tolerance τ = ϵ / (3n² + 2n + 2), or with O(sn²) statistical queries from STAT(Ш, D) at tolerance τ̄ = 3ctnϵ² / (3n² + 2n + 2)². Please refer to Appendix B for a complete proof of Theorem 2. Due to the probability lower bound assumptions, the legitimacy and feasibility are obvious. To calculate the tolerance for PAC learning, we first consider the exact learning tolerance. Let x′ be an infinite string generated by the Markov chain defined above. For any 0 ≤ ℓ ≤ L − j, we define the quantity R_ℓ(j, i) by R_ℓ(j, i) = Pr[u[ℓ+1, ℓ+j] ⊑ x′[m+1, m+i] ∧ u[ℓ+1, ℓ+j] ⋢ x′[m+1, m+i−1] | x′_m = u_ℓ]. Intuitively, R_ℓ(j, i) is the probability that the smallest g such that u[ℓ+1, ℓ+j] ⊑ x′[m+1, m+g] is i, given x′_m = u_ℓ. We have the following conclusion on the exact learning tolerance. Lemma 3 Under Markovian string distributions over the instance space I = Σ^{≤n}, given Pr[|x| = k] ≥ t > 0 for all 1 ≤ k ≤ n and min{M(·, ·), π_0(·)} ≥ c > 0, the concept class Ш is exactly identifiable with O(sn²) conditional statistical queries from STAT(Ш, D) at tolerance τ′ = min_{0≤ℓ<L} { (1/(3(n − h))) Σ_{k=h+1}^{n} R_{ℓ+1}(L − ℓ − 1, k − h − 1) }. The algorithm first deals with the marginal case where Pr[y = +1] ≤ ϵ through the query Pr[y = +1]. If it doesn’t halt, we know Pr[y = +1] is at least (3n² + 2n)ϵ/(3n² + 2n + 2). We then make a statistical query χ′_h(x, y) = (y + 1)/2 · 1{I_{v⊑x} = h} for each h from ℓ to n − 1.
It can be shown that at least one h will give an answer ≥ (3n + 1)ϵ/(3n² + 2n + 2). This implies lower bounds for Pr[I_{v⊑x} = h] and Pr[y = +1 | I_{v⊑x} = h]. The former guarantees the feasibility, while the latter can serve as a lower bound for the sum in Lemma 3 after some algebra, completing the proof. The assumption on M and π_0 can be weakened to M(u_{ℓ+1}, u_ℓ) = Pr[x_2 = u_{ℓ+1} | x_1 = u_ℓ] ≥ c for all 1 ≤ ℓ ≤ L − 1, together with π_0(u_1) ≥ c. We first make a statistical query to estimate M(a, u_ℓ) for ℓ ≥ 1, or π_0(a) for ℓ = 0, for each symbol a ∈ Σ at tolerance c/3. If the result is ≤ 2c/3 then M(a, u_ℓ) ≤ c or π_0(a) ≤ c and we won’t consider symbol a at this position. Otherwise, M(a, u_ℓ) ≥ c/3 or π_0(a) ≥ c/3 and the queries in the algorithm are feasible. Corollary 2 Under Markovian string distributions over the instance space I = Σ^{≤n}, given Pr[|x| = k] ≥ t > 0 for all 1 ≤ k ≤ n, π_0(u_1) ≥ c and M(u_{ℓ+1}, u_ℓ) ≥ c > 0 for all 1 ≤ ℓ ≤ L − 1, the concept class Ш is approximately identifiable with O(sn²) conditional statistical queries from STAT(Ш, D) at tolerance τ = min{ϵ/(3n² + 2n + 2), c/3}, or with O(sn²) statistical queries from STAT(Ш, D) at tolerance τ̄ = min{ctnϵ²/(3n² + 2n + 2)², tnϵc²/(3(3n² + 2n + 2))}. 5 Learning shuffle ideals under general distributions Although the string distribution is restricted or even known in most application scenarios, one might be interested in learning shuffle ideals under general unrestricted and unknown distributions without any prior knowledge. Unfortunately, under standard complexity assumptions, the answer is negative. Angluin et al. [3] have shown that a polynomial time PAC learning algorithm for principal shuffle ideals would imply the existence of polynomial time algorithms to break the RSA cryptosystem, factor Blum integers, and test quadratic residuosity. Theorem 3 ([3]) For any alphabet of size at least 2, given two disjoint sets of strings S, T ⊂ Σ^{≤n}, the problem of determining whether there exists a string u such that u ⊑ x for each x ∈ S and u ⋢ x for each x ∈ T is NP-complete.
As Ш is a subclass of the ideal class X, we know learning X is only harder. Is the problem easier over the instance space Σ^n? The answer is again no. Lemma 4 Under general unrestricted string distributions, a concept class is PAC learnable over the instance space Σ^{≤n} if and only if it is PAC learnable over the instance space Σ^n. The proof of Lemma 4 is presented in Appendix D, using the same idea as our generalization in Section 3.3. Note that Lemma 4 holds under general string distributions. It is not necessarily true when we have assumptions on the marginal distribution of string length. Despite the infeasibility of PAC learning a shuffle ideal in theory, it is worth exploring the possibilities of performing the classification without theoretical guarantees, since most applications care more about the empirical performance than about theoretical results. For this purpose we propose a heuristic greedy algorithm for learning principal shuffle ideals based on a reward strategy, as follows. Upon having recovered v = û[1, ℓ], for a symbol a ∈ Σ and a string x of length n, we say a consumes k elements in x if min{I_{va⊑x}, n + 1} − I_{v⊑x} = k. The reward strategy depends on the ratio r₊/r₋: the algorithm receives r₋ reward for each element it consumes in a negative example, and r₊ penalty for each symbol it consumes in a positive string. A symbol is chosen as û_{ℓ+1} if it brings us the most reward. The algorithm will halt once û exhausts any positive example and makes a false negative error, which means we have gone too far. Finally the ideal Ш(û[1, ℓ − 1]) is returned as the hypothesis. The performance of this greedy algorithm depends a great deal on the selection of the parameter r₊/r₋. A clever choice is r₊/r₋ = #(−)/#(+), where #(+) is the number of positive examples x such that û ⊑ x and #(−) is the number of negative examples x such that û ⊑ x. A more recommended but more complex strategy to determine the parameter r₊/r₋ in practice is cross validation.
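The greedy reward heuristic can be sketched directly from this description. In the version below (our reading of the algorithm; variable names, tie-breaking and the exact halting point are ours), the ratio r₊/r₋ = #(−)/#(+) is recomputed at each step, and the halting test is applied before each extension so that no false negative is ever introduced:

```python
def embed_end(u, x):
    """1-based end of the leftmost embedding of the simple string u
    in x; 0 if u is empty; None if u is not a subsequence of x."""
    if not u:
        return 0
    j = 0
    for i, s in enumerate(x, 1):
        if s == u[j]:
            j += 1
            if j == len(u):
                return i
    return None

def greedy_shuffle_ideal(sample, sigma):
    """Greedy reward-based heuristic for a principal shuffle ideal.

    sample: list of (string x, label y) with y in {+1, -1}.
    Returns the recovered pattern u-hat; accept x iff u-hat ⊑ x.
    """
    u = ""
    while True:
        # examples that u still embeds into, with embedding end positions
        live = [(x, y, embed_end(u, x)) for x, y in sample]
        live = [(x, y, p) for x, y, p in live if p is not None]
        n_pos = sum(1 for _, y, _ in live if y > 0)
        n_neg = sum(1 for _, y, _ in live if y < 0)
        if n_pos == 0 or n_neg == 0:
            return u
        r_plus, r_minus = n_neg, n_pos     # ratio r+/r- = #(-)/#(+)
        best_a = best_r = None
        for a in sigma:
            r = 0
            for x, y, p in live:
                q = embed_end(u + a, x)
                # a consumes min{I_{va⊑x}, n+1} - I_{v⊑x} elements of x
                consumed = (len(x) + 1 if q is None else q) - p
                r += (r_minus if y < 0 else -r_plus) * consumed
            if best_r is None or r > best_r:
                best_a, best_r = a, r
        # halt before a false negative: extending must not exhaust a positive
        if any(y > 0 and embed_end(u + best_a, x) is None for x, y, _ in live):
            return u
        u += best_a
```

On a toy labeled sample whose positives all contain "ab" as a subsequence and whose negatives do not, the heuristic recovers the pattern "ab" in two greedy steps.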
A better studied approach to learning regular languages, especially the piecewise-testable ones, in recent works is kernel machines ([12, 13]). An obvious advantage of kernel machines over our greedy method is their broad applicability to general classification learning problems. Nevertheless, the time complexity of the kernel machine is O(N³ + n²N²) on a training sample set of size N ([5]), while our greedy method only takes O(snN) time due to its great simplicity. Because N is usually huge for the demand of accuracy, kernel machines suffer from low efficiency and long running time in practice. To make a comparison between the greedy method and kernel machines for empirical performance, we conducted a series of experiments on a real-world dataset [4] with the string length n as a variable. The experimental results demonstrate the empirical advantage in both efficiency and accuracy of the greedy algorithm over the kernel method, in spite of its simplicity. As this is a theoretical paper, we defer the details on the experiments to Appendix D, including the experiment setup and figures of detailed experimental results. 6 Discussion We have shown positive results for learning shuffle ideals in the statistical query model under element-wise independent and identical distributions and Markovian distributions, as well as a constrained generalization to product distributions. It is still open to explore the possibilities of learning shuffle ideals under less restricted distributions with weaker assumptions. Also, a lot more work needs to be done on approximately learning shuffle ideals in applications with pragmatic approaches. In the negative direction, even a family of regular languages as simple as the shuffle ideals is not efficiently properly PAC learnable under general unrestricted distributions unless RP=NP. Thus, the search for a nontrivial properly PAC learnable family of regular languages continues.
Another theoretical question that remains is how hard the problem of learning shuffle ideals is, or whether PAC learning a shuffle ideal is as hard as PAC learning a deterministic finite automaton. Acknowledgments We give our sincere gratitude to Professor Dana Angluin of Yale University for valuable discussions and comments on the learning problem and the proofs. Our thanks are also due to Professor Joseph Chang of Yale University for suggesting supportive references on strong unimodality of probability distributions and to the anonymous reviewers for their helpful feedback. References [1] D. Angluin. On the complexity of minimum inference of regular sets. Information and Control, 39(3):337–350, 1978. [2] D. Angluin. Learning regular sets from queries and counterexamples. Information and Computation, 75(2):87–106, Nov. 1987. [3] D. Angluin, J. Aspnes, S. Eisenstat, and A. Kontorovich. On the learnability of shuffle ideals. Journal of Machine Learning Research, 14:1513–1531, 2013. [4] K. Bache and M. Lichman. NSF research award abstracts 1990-2003 data set. UCI Machine Learning Repository, 2013. [5] L. Bottou and C.-J. Lin. Support vector machine solvers. Large Scale Kernel Machines, pages 301–320, 2007. [6] N. H. Bshouty. Exact learning of formulas in parallel. Machine Learning, 26(1):25–41, Jan. 1997. [7] C. de la Higuera. A bibliographical study of grammatical inference. Pattern Recognition, 38(9):1332–1348, Sept. 2005. [8] B. Gnedenko and A. N. Kolmogorov. Limit Distributions for Sums of Independent Random Variables. Addison-Wesley series in statistics, 1949. [9] E. M. Gold. Complexity of automaton identification from given data. Information and Control, 37(3):302–320, 1978. [10] I. Ibragimov. On the composition of unimodal distributions. Theory of Probability and Its Applications, 1(2):255–260, 1956. [11] M. Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM (JACM), 45(6):983–1006, Nov. 1998. [12] L. A. Kontorovich, C.
Cortes, and M. Mohri. Kernel methods for learning languages. Theoretical Computer Science, 405(3):223–236, Oct. 2008. [13] L. A. Kontorovich and B. Nadler. Universal kernel-based learning with applications to regular languages. The Journal of Machine Learning Research, 10:1095–1129, June 2009. [14] K. Koskenniemi. Two-level model for morphological analysis. Proceedings of the Eighth International Joint Conference on Artificial Intelligence - Volume 2, pages 683–685, 1983. [15] M. Mohri. On some applications of finite-state automata theory to natural language processing. Journal of Natural Language Engineering, 2(1):61–80, Mar. 1996. [16] M. Mohri. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269–311, June 1997. [17] M. Mohri, P. J. Moreno, and E. Weinstein. Efficient and robust music identification with weighted finite-state transducers. IEEE Transactions on Audio, Speech, and Language Processing, 18(1):197–207, Jan. 2010. [18] M. Mohri, F. Pereira, and M. Riley. Weighted finite-state transducers in speech recognition. Computer Speech and Language, 16(1):69–88, 2002. [19] L. Pitt and M. K. Warmuth. The minimum consistent DFA problem cannot be approximated within any polynomial. Journal of the ACM (JACM), 40(1):95–142, Jan. 1993. [20] O. Rambow, S. Bangalore, T. Butt, A. Nasr, and R. Sproat. Creating a finite-state parser with application semantics. Proceedings of the 19th International Conference on Computational Linguistics - Volume 2, pages 1–5, 2002. [21] R. Sproat, W. Gale, C. Shih, and N. Chang. A stochastic finite-state word-segmentation algorithm for Chinese. Computational Linguistics, 22(3):377–404, Sept. 1996. [22] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, Nov. 1984.
|
2014
|
146
|
5,232
|
Spectral Clustering of Graphs with the Bethe Hessian Alaa Saade Laboratoire de Physique Statistique, CNRS UMR 8550 École Normale Supérieure, 24 Rue Lhomond Paris 75005 Florent Krzakala* Sorbonne Universités, UPMC Univ Paris 06 Laboratoire de Physique Statistique, CNRS UMR 8550 École Normale Supérieure, 24 Rue Lhomond Paris 75005 Lenka Zdeborová Institut de Physique Théorique CEA Saclay and CNRS URA 2306 91191 Gif-sur-Yvette, France Abstract Spectral clustering is a standard approach to label nodes on a graph by studying the (largest or lowest) eigenvalues of a symmetric real matrix such as e.g. the adjacency or the Laplacian. Recently, it has been argued that using instead a more complicated, non-symmetric and higher-dimensional operator, related to the non-backtracking walk on the graph, leads to improved performance in detecting clusters, and even to optimal performance for the stochastic block model. Here, we propose to use instead a simpler object, a symmetric real matrix known as the Bethe Hessian operator, or deformed Laplacian. We show that this approach combines the performance of the non-backtracking operator, thus detecting clusters all the way down to the theoretical limit in the stochastic block model, with the computational, theoretical and memory advantages of real symmetric matrices. Clustering a graph into groups or functional modules (sometimes called communities) is a central task in many fields ranging from machine learning to biology. A common benchmark for this problem is to consider graphs generated by the stochastic block model (SBM) [7, 22]. In this case, one considers n vertices and each of them has a group label g_v ∈ {1, ..., q}. A graph is then created as follows: all edges are generated independently according to a q × q matrix p of probabilities, with Pr[A_{u,v} = 1] = p_{g_u,g_v}. The group labels are hidden, and the task is to infer them from the knowledge of the graph.
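The sampling process just described is easy to write down; the following is an illustrative sketch (the group assignment, graph size, and probability matrix are arbitrary choices of this sketch, not values from the paper):

```python
import numpy as np

def sample_sbm(n, q, p, rng=None):
    """Sample an undirected SBM graph: node u gets a hidden label g[u] in
    {0, ..., q-1}, and each edge (u, v) is present independently with
    probability p[g[u], g[v]].  Returns (adjacency matrix, labels)."""
    rng = np.random.default_rng(rng)
    g = rng.integers(0, q, size=n)            # hidden group labels
    probs = p[np.ix_(g, g)]                   # n x n matrix of edge probabilities
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    A = (upper | upper.T).astype(int)         # symmetrize, no self-loops
    return A, g

# Sparse assortative example: p_ab = c_ab / n with c_in > c_out.
n, q, cin, cout = 1000, 2, 7.0, 1.0
c = np.full((q, q), cout / n)
np.fill_diagonal(c, cin / n)
A, g = sample_sbm(n, q, c, rng=0)
```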
The stochastic block model generates graphs that are a generalization of the Erdős–Rényi ensemble in which an unknown labeling has been hidden. We concentrate on the sparse case, where algorithmic challenges appear. In this case p_{ab} is O(1/n), and we denote p_{ab} = c_{ab}/n. For simplicity we concentrate on the most commonly studied case where groups are equally sized, with c_{ab} = c_in if a = b and c_{ab} = c_out if a ≠ b. The case c_in > c_out is referred to as the assortative case, because vertices from the same group connect with higher probability than with vertices from other groups; c_out > c_in is called the disassortative case. An important conjecture [4] is that any tractable algorithm will only detect communities if |c_in − c_out| > q√c , (1) where c is the average degree. In the case of q = 2 groups, in particular, this has been rigorously proven [15, 12] (in this case, one can also prove that no algorithm could detect communities if this condition is not met). An ideal clustering algorithm should have a low computational complexity while being able to perform optimally for the stochastic block model, detecting clusters down to the transition (1). *This work has been supported in part by the ERC under the European Union's 7th Framework Programme Grant Agreement 307087-SPARCS. So far there are two algorithms in the literature able to detect clusters down to the transition (1). One is a message-passing algorithm based on belief propagation [5, 4]. This algorithm, however, needs to be fed with the correct parameters of the stochastic block model to perform well, and its computational complexity scales quadratically with the number of clusters, which is an important practical limitation. To avoid such problems, the most popular non-parametric approaches to clustering are spectral methods, where one classifies vertices according to the eigenvectors of a matrix associated with the network, for instance its adjacency matrix [11, 16].
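As a quick numerical illustration of the threshold (1) (a sketch; the parameter values are chosen for illustration, and the average degree of the equal-group-size SBM is computed as c = (c_in + (q − 1)c_out)/q):

```python
import math

def detectable(cin, cout, q):
    """Condition (1): communities are detectable by a tractable algorithm
    when |cin - cout| > q * sqrt(c), with c the average degree."""
    c = (cin + (q - 1) * cout) / q   # average degree for equal-sized groups
    return abs(cin - cout) > q * math.sqrt(c)

print(detectable(7, 1, 2))   # c = 4, |7 - 1| = 6 > 2*sqrt(4) = 4  -> True
print(detectable(5, 3, 2))   # c = 4, |5 - 3| = 2 < 4              -> False
```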
However, while this works remarkably well on regular, or dense enough, graphs [2], the standard versions of spectral clustering are suboptimal on graphs generated by the SBM, and in some cases completely fail to detect communities even when other (more complex) algorithms such as belief propagation can do so. Recently, a new class of spectral algorithms based on the use of a non-backtracking walk on the directed edges of the graph has been introduced [9] and argued to be better suited for spectral clustering. In particular, it has been shown to be optimal for graphs generated by the stochastic block model, and able to detect communities even in the sparse case all the way down to the theoretical limit (1). These results are, however, not entirely satisfactory. First, the use of a high-dimensional matrix (of dimension 2m, where m is the number of edges, rather than n, the number of nodes) can be expensive, both in terms of computational time and memory. Secondly, linear algebra methods are faster and more efficient for symmetric matrices than for non-symmetric ones. The first problem was partially resolved in [9], where an equivalent operator of dimension 2n was shown to exist. It was still, however, non-symmetric and, more importantly, the reduction does not extend to weighted graphs, and thus presents a strong limitation. In this contribution, we provide the best of both worlds: a non-parametric spectral algorithm for clustering with a symmetric n × n real operator that performs as well as the non-backtracking operator of [9], in the sense that it identifies communities as soon as (1) holds. We show numerically that our approach performs as well as the belief propagation algorithm, without needing prior knowledge of any parameter, making it the simplest algorithmically among the best-performing clustering methods.
This operator is actually not new, and has been known as the Bethe Hessian in the context of statistical physics and machine learning [14, 17], or the deformed Laplacian in other fields. However, to the best of our knowledge, it has never been considered in the context of spectral clustering. The paper is organized as follows. In Sec. 1 we give the expression of the Bethe Hessian operator. We discuss in detail its properties and its connection with both the non-backtracking operator and an Ising spin glass in Sec. 2. In Sec. 3, we study analytically the spectrum in the case of the stochastic block model. Finally, in Sec. 4 we perform numerical tests on both the stochastic block model and on some real networks. 1 Clustering based on the Bethe Hessian matrix Let G = (V, E) be a graph with n vertices, V = {1, ..., n}, and m edges. Denote by A its adjacency matrix, and by D the diagonal matrix defined by D_ii = d_i, ∀i ∈ V, where d_i is the degree of vertex i. We then define the Bethe Hessian matrix, sometimes called the deformed Laplacian, as H(r) := (r² − 1)𝟙 − rA + D , (2) where |r| > 1 is a regularizer that we will set to a well-defined value |r| = r_c depending on the graph, for instance r_c = √c in the case of the stochastic block model, where c is the average degree of the graph (see Sec. 2.1). The spectral algorithm that is the main result of this paper works as follows: we compute the eigenvectors associated with the negative eigenvalues of both H(r_c) and H(−r_c), and cluster them with a standard clustering algorithm such as k-means (or simply by looking at the sign of the components in the case of two communities). The negative eigenvalues of H(r_c) reveal the assortative aspects, while those of H(−r_c) reveal the disassortative ones. Figure 1 illustrates the spectral properties of the Bethe Hessian (2) for networks generated by the stochastic block model. When r = ±√c the informative eigenvalues (i.e.
those having eigenvectors correlated to the cluster structure) are the negative ones, while the non-informative bulk remains positive. There are as many negative eigenvalues as there are hidden clusters. It is thus straightforward to select the relevant eigenvectors. This is very unlike the situation for the operators used in standard spectral clustering algorithms (except, again, for the non-backtracking operator), where one must decide in a somewhat ambiguous way which eigenvalues are relevant (outside the bulk) or not (inside the bulk). Here, on the contrary, no prior knowledge of the number of communities is needed.

[Figure 1: Spectral density of the Bethe Hessian for various values of the regularizer r on the stochastic block model. The red dots are the result of the direct diagonalization of the Bethe Hessian for a graph of 10^4 vertices with 2 clusters, with c = 4, c_in = 7, c_out = 1. The black curves are the solutions to the recursion (15) for c = 4, obtained from population dynamics (with a population of size 10^5), see Section 3. We isolated the two smallest eigenvalues, represented as small bars for convenience. The dashed black line marks the x = 0 axis, and the inset is a zoom around this axis. At a large value of r (top left, r = 5), the Bethe Hessian is positive definite and all eigenvalues are positive. As r decays, the spectrum moves towards the x = 0 axis. The smallest (non-informative) eigenvalue reaches zero for r = c = 4 (top middle), followed, as r decays further, by the second (informative) eigenvalue at r = (c_in − c_out)/2 = 3, which is the value of the second largest eigenvalue of B in this case [9] (top right). Finally, the bulk reaches 0 at r_c = √c = 2 (bottom left). At this point, the information is in the negative part, while the bulk is in the positive part. Interestingly, if r decays further (bottom middle and right) the bulk of the spectrum remains positive, but the informative eigenvalues blend back into the bulk. The best choice is thus to work at r_c = √c = 2.]

On more general graphs, we argue that the best choice for the regularizer is r_c = √ρ(B), where ρ(B) is the spectral radius of the non-backtracking operator. We support this claim both numerically, on real-world networks (Sec. 4.2), and analytically (Sec. 3). We also show that ρ(B) can be computed without building the matrix B itself, by efficiently solving a quadratic eigenproblem (Sec. 2.1). The Bethe Hessian can be generalized straightforwardly to the weighted case: if the edge (i, j) carries a weight w_ij, then we can use the matrix H̃(r) defined by H̃(r)_ij = δ_ij ( 1 + Σ_{k∈∂i} w_ik² / (r² − w_ik²) ) − r w_ij A_ij / (r² − w_ij²) , (3) where ∂i denotes the set of neighbors of vertex i. This is in fact the general expression of the Bethe Hessian of a certain weighted statistical model (see Section 2.2). If all weights are equal to unity, H̃ reduces to (2) up to a trivial factor. Most of the arguments developed in the following generalize immediately to H̃, including the relationship with the weighted non-backtracking operator, introduced in the conclusion of [9]. 2 Derivation and relation to previous works Our approach is connected to both the spectral algorithm using the non-backtracking matrix and to an Ising spin glass model. We now discuss these connections, and the properties of the Bethe Hessian operator, along the way. 2.1 Relation with the non-backtracking matrix The non-backtracking operator of [9] is defined as a 2m × 2m non-symmetric matrix indexed by the directed edges of the graph i → j: B_{i→j,k→l} = δ_jk (1 − δ_il) .
(4) The remarkable efficiency of the non-backtracking operator is due to the particular structure of its (complex) spectrum. For graphs generated by the SBM the spectrum decomposes into a bulk of uninformative eigenvalues, sharply constrained when n → ∞ to the disk of radius √ρ(B), where ρ(B) is the spectral radius of B [20], well separated from the real, informative eigenvalues that lie outside of this circle. It was also remarked that the number of real eigenvalues outside of the circle is the number of communities, when the graph was generated by the stochastic block model. More precisely, the presence of assortative communities yields real positive eigenvalues larger than √ρ(B), while the presence of disassortative communities yields real negative eigenvalues smaller than −√ρ(B). The authors of [9] showed that all eigenvalues λ of B that are different from ±1 are roots of the polynomial det[(λ² − 1)𝟙 − λA + D] = det H(λ) . (5) This is known in graph theory as the Ihara–Bass formula for the graph zeta function. It provides the link between B and the (determinant of the) Bethe Hessian (already noticed in [23]): a real eigenvalue of B corresponds to a value of r such that the Bethe Hessian has a vanishing eigenvalue. For any finite n, when r is large enough, H(r) is positive definite. Then as r decreases, a new negative eigenvalue of H(r) appears when it crosses the zero axis, i.e. whenever r is equal to a real positive eigenvalue λ of B. The null space of H(λ) is related to the corresponding eigenvector of B. Denoting (v_i)_{1≤i≤n} the eigenvector of H(λ) with eigenvalue 0, and (v_{i→j})_{(i,j)∈E} the eigenvector of B with eigenvalue λ, we have [9]: v_i = Σ_{k∈∂i} v_{k→i} . (6) Therefore the vector (v_i)_{1≤i≤n} is correlated with the community structure when (v_{i→j})_{(i,j)∈E} is.
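The Ihara–Bass relation (5) can be checked numerically on a small graph; below, B is built directly from definition (4) with dense matrices (a sanity-check sketch, not the paper's code; the complete graph K4 is an arbitrary test case):

```python
import numpy as np

# Test graph: the complete graph K4 (3-regular, so rho(B) = d - 1 = 2).
n = 4
A = np.ones((n, n)) - np.eye(n)
D = np.diag(A.sum(axis=1))

# Directed edges and the non-backtracking matrix (eq. 4):
# B[(i->j), (k->l)] = 1 iff j == k and i != l.
edges = [(i, j) for i in range(n) for j in range(n) if A[i, j]]
B = np.zeros((len(edges), len(edges)))
for a, (i, j) in enumerate(edges):
    for b, (k, l) in enumerate(edges):
        if j == k and i != l:
            B[a, b] = 1.0

def det_H(lam):
    """The Ihara-Bass polynomial det[(lam^2 - 1) I - lam A + D]  (eq. 5)."""
    return np.linalg.det((lam**2 - 1) * np.eye(n) - lam * A + D)

# The Perron eigenvalue of B for a d-regular graph is d - 1 = 2 here,
# and it is indeed a root of the Ihara-Bass polynomial.
rho = max(abs(np.linalg.eigvals(B)))
assert abs(rho - 2) < 1e-8
assert abs(det_H(2.0)) < 1e-8
```

Indeed, for K4 one has det H(2) = det(6·𝟙 − 2A), which vanishes because 3 is an eigenvalue of A.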
The numerical experiments of Section 4 show that when r = √c < λ, the eigenvector (v_i)_{1≤i≤n} corresponds to a strictly negative eigenvalue, and is even more correlated with the community structure than the eigenvector (v_{i→j})_{(i,j)∈E}. This fact still lacks a proper theoretical understanding. We provide in Section 2.2 a different, physical justification of the relevance of the "negative" eigenvectors of the Bethe Hessian for community detection. Of course, the same phenomenon takes place when increasing r from a large negative value. In order to translate all the informative eigenvalues of B into negative eigenvalues of H(r) we adopt r_c = √ρ(B) , (7) since all the relevant eigenvalues of B are outside the circle of radius r_c. On the other hand, H(r = 1) is the standard, positive-semidefinite Laplacian, so that for r < r_c the negative eigenvalues of H(r) move back into the positive part of the spectrum. This is consistent with the observation of [9] that the eigenvalues of B come in pairs having their product close to ρ(B), so that for each root λ > r_c of (5), corresponding to the appearance of a new negative eigenvalue, there is another root λ′ ≃ ρ(B)/λ < r_c which we numerically found to correspond to the same eigenvalue becoming positive again. Let us stress that to compute ρ(B), we do not need to actually build the non-backtracking matrix. First, for large random networks of a given degree distribution, ρ(B) = ⟨d²⟩/⟨d⟩ − 1 [9], where ⟨d⟩ and ⟨d²⟩ are the first and second moments of the degree distribution. In a more general setting, we can efficiently refine this initial guess by solving for the closest root of the quadratic eigenproblem defined by (5), e.g. using a standard SLP algorithm [19]. With the choice (7), the informative eigenvalues of B are in one-to-one correspondence with the union of the negative eigenvalues of H(r_c) and H(−r_c).
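Putting the pieces together, here is a minimal sketch of the resulting algorithm for two assortative communities (my illustrative implementation, not the authors' supplementary Matlab code): estimate r_c from the degree moments via ρ(B) ≈ ⟨d²⟩/⟨d⟩ − 1, build H(r_c), and split along the sign of the eigenvector attached to the second most negative eigenvalue. The two-clique test graph is an arbitrary choice.

```python
import numpy as np

def bethe_hessian(A, r):
    """H(r) = (r^2 - 1) I - r A + D  (eq. 2)."""
    n = A.shape[0]
    return (r**2 - 1) * np.eye(n) - r * A + np.diag(A.sum(axis=1))

def cluster_two_communities(A):
    """Sketch: rc = sqrt(<d^2>/<d> - 1) estimates sqrt(rho(B)); for two
    assortative groups, the sign of the eigenvector of the second most
    negative eigenvalue of H(rc) gives the partition."""
    d = A.sum(axis=1)
    rc = np.sqrt((d**2).mean() / d.mean() - 1)
    vals, vecs = np.linalg.eigh(bethe_hessian(A, rc))   # ascending order
    assert vals[1] < 0, "fewer than two negative eigenvalues"
    return (vecs[:, 1] > 0).astype(int)

# Toy test: two 10-cliques joined by a single edge.
n = 20
A = np.zeros((n, n))
A[:10, :10] = 1
A[10:, 10:] = 1
np.fill_diagonal(A, 0)
A[0, 10] = A[10, 0] = 1
labels = cluster_two_communities(A)
```

On this toy graph H(r_c) has exactly two negative eigenvalues, and the sign split recovers the two cliques.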
Because B has as many informative eigenvalues as there are (detectable) communities in the network [9], their number will therefore tell us the number of (detectable) communities in the graph, and we will use them to infer the community membership of the nodes, by using a standard clustering algorithm such as k-means. 2.2 Hessian of the Bethe free energy Let us define a pairwise Ising model on the graph G by the joint probability distribution P({x}) = (1/Z) exp( Σ_{(i,j)∈E} atanh(1/r) x_i x_j ) , (8) where {x} := {x_i}_{i∈{1..n}} ∈ {±1}^n is a set of binary random variables sitting on the nodes of the graph G. The regularizer r is here a parameter that controls the strength of the interaction between the variables: the larger |r| is, the weaker the interaction. In order to study this model, a standard approach in machine learning is the Bethe approximation [21], in which the means ⟨x_i⟩ and moments ⟨x_i x_j⟩ are approximated by the parameters m_i and ξ_ij that minimize the so-called Bethe free energy F_Bethe({m_i}, {ξ_ij}), defined as F_Bethe({m_i}, {ξ_ij}) = − Σ_{(i,j)∈E} atanh(1/r) ξ_ij + Σ_{(i,j)∈E} Σ_{x_i,x_j} η( (1 + m_i x_i + m_j x_j + ξ_ij x_i x_j)/4 ) + Σ_{i∈V} (1 − d_i) Σ_{x_i} η( (1 + m_i x_i)/2 ) , (9) where η(x) := x ln x. Such an approach allows one, for instance, to derive the belief propagation (BP) algorithm. Here, however, we wish to restrict ourselves to a spectral one. At very high r the minimum of the Bethe free energy is given by the so-called paramagnetic point m_i = 0, ξ_ij = 1/r. It turns out [14] that m_i = 0, ξ_ij = 1/r is a stationarity point of the Bethe free energy for every r. Instead of considering the complete Bethe free energy, we will consider only its behavior around the paramagnetic point. This can be expressed via the Hessian (matrix of second derivatives), which has been studied extensively, see e.g. [14, 17]. At the paramagnetic point, the blocks of the Hessian involving one derivative with respect to the ξ_ij are 0, and the block involving two such derivatives is a positive definite diagonal matrix [23].
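The stationarity of the paramagnetic point can be verified numerically from expression (9) (a sanity-check sketch; the triangle test graph and r = 3 are arbitrary choices of mine):

```python
import numpy as np

def eta(x):
    return x * np.log(x)

def f_bethe(A, r, m, xi):
    """Bethe free energy (9) for the Ising model (8); m[i] are the means
    and xi[i, j] the pairwise moments."""
    n = A.shape[0]
    d = A.sum(axis=1)
    F = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j]:
                F -= np.arctanh(1.0 / r) * xi[i, j]
                for si in (-1, 1):
                    for sj in (-1, 1):
                        F += eta((1 + m[i]*si + m[j]*sj + xi[i, j]*si*sj) / 4)
    for i in range(n):
        for si in (-1, 1):
            F += (1 - d[i]) * eta((1 + m[i]*si) / 2)
    return F

# Central differences around m = 0, xi = 1/r: the gradient with respect
# to every m_i vanishes, i.e. the paramagnetic point is stationary.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # triangle
r, h = 3.0, 1e-6
xi0 = np.full((3, 3), 1.0 / r)
for i in range(3):
    m = np.zeros(3)
    m[i] = h
    grad_i = (f_bethe(A, r, m, xi0) - f_bethe(A, r, -m, xi0)) / (2 * h)
    assert abs(grad_i) < 1e-6
```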
We will therefore, somewhat improperly, call Hessian the matrix ℋ_ij(r) = ∂²F_Bethe / ∂m_i ∂m_j |_{m_i=0, ξ_ij=1/r} . (10) In particular, at the paramagnetic point: ℋ(r) = 𝟙 + D/(r² − 1) − rA/(r² − 1) = H(r)/(r² − 1) . (11) A more general expression of the Bethe Hessian in the case of weighted interactions atanh(w_ij/r) (with weights rescaled to be in [0, 1]) is given by eq. (3). All eigenvectors of ℋ(r) and H(r) are the same, as are the eigenvalues up to a multiplicative positive factor (since we consider only |r| > 1). The paramagnetic point is stable iff ℋ(r) is positive definite. The appearance of each negative eigenvalue of the Hessian corresponds to a phase transition in the Ising model at which a new cluster (or a set of clusters) starts to be identifiable. The corresponding eigenvector will give the direction towards the cluster labeling. This motivates the use of the Bethe Hessian for spectral clustering. For tree-like graphs such as those generated by the SBM, model (8) can be studied analytically in the asymptotic limit n → ∞. The locations of the possible phase transitions in model (8) are also known from spin glass theory and the theory of phase transitions on random graphs (see e.g. [14, 5, 4, 17]). For positive r the trivial ferromagnetic phase appears at r = c, while the transitions towards the phases corresponding to the hidden community structure arise for √c < r < c. For disassortative communities, the situation is symmetric, with r < −√c. Interestingly, at r = ±√c, the model undergoes a spin glass phase transition. At this point all the relevant eigenvalues have passed to the negative side (all the possible transitions from the paramagnetic state to the hidden structure have taken place) while the bulk of non-informative ones remains positive. This scenario is illustrated in Fig. 1 for the case of two assortative clusters.
3 The spectrum of the Bethe Hessian The spectral density of the Bethe Hessian can be computed analytically on tree-like graphs such as those generated by the stochastic block model. This will serve two goals: i) to justify independently our choice for the value of the regularizer r, and ii) to show that for all values of r, the bulk of uninformative eigenvalues remains in the positive region. The spectral density is defined by ν(λ) = (1/n) Σ_{i=1}^n δ(λ − λ_i) , (12) where the λ_i's are the eigenvalues of the Bethe Hessian. It can be shown [18] that it is also given by ν(λ) = (1/(πn)) Σ_{i=1}^n Im Δ_i(λ) , (13) where the Δ_i are complex variables living on the vertices of the graph G, which are given by Δ_i = ( −λ + r² + d_i − 1 − r² Σ_{l∈∂i} Δ_{l→i} )^{−1} , (14) where d_i is the degree of node i in the graph, and ∂i is the set of neighbors of i. The Δ_{i→j} are the (linearly stable) solution of the following belief propagation recursion, or cavity method [13]: Δ_{i→j} = ( −λ + r² + d_i − 1 − r² Σ_{l∈∂i\j} Δ_{l→i} )^{−1} . (15) The ingredients to derive this formula are to turn the computation of the spectral density into a marginalization problem for a graphical model on the graph G, and then write the belief propagation equations to solve it. It can be shown [3] that this approach leads to an asymptotically exact description of the spectral density on random graphs such as those generated by the stochastic block model, which are locally tree-like in the limit n → ∞. We can solve equation (15) numerically using a population dynamics algorithm [13]: starting from a pool of variables, we iterate by drawing at each step a variable, its excess degree and its neighbors from the pool, and updating its value according to (15). The results are shown in Fig. 1: the bulk of the spectrum is always positive. We now justify analytically that the bulk of eigenvalues of the Bethe Hessian reaches 0 at r = √ρ(B).
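Equations (13)-(15) can also be iterated directly on a given (small) graph instance, rather than through population dynamics; the sketch below regularizes λ with a small imaginary part ε, a standard trick of the cavity method (the value of ε, the iteration count, and the 6-cycle test graph are choices of this sketch):

```python
import numpy as np

def bethe_spectral_density(A, r, lam, eps=0.05, n_iter=500):
    """Iterate the cavity recursion (15) on the directed edges of A, with
    lam shifted to lam + i*eps, then return the estimate (13)-(14) of the
    spectral density of the Bethe Hessian H(r) at lam."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    edges = [(i, j) for i in range(n) for j in range(n) if A[i, j]]
    idx = {e: a for a, e in enumerate(edges)}
    z = lam + 1j * eps
    delta = np.full(len(edges), 1.0 / r**2, dtype=complex)  # lam = 0 fixed point
    for _ in range(n_iter):
        new = np.empty_like(delta)
        for a, (i, j) in enumerate(edges):
            s = sum(delta[idx[(l, i)]] for l in range(n) if A[l, i] and l != j)
            new[a] = 1.0 / (-z + r**2 + deg[i] - 1 - r**2 * s)
        delta = new
    # site marginals (eq. 14) and density (eq. 13)
    total = 0.0
    for i in range(n):
        s = sum(delta[idx[(l, i)]] for l in range(n) if A[l, i])
        total += (1.0 / (-z + r**2 + deg[i] - 1 - r**2 * s)).imag
    return total / (np.pi * n)

# On a long cycle H(2) has its spectrum inside [1, 9], so the density
# should be appreciable at lam = 3 and essentially zero at lam = 20.
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1
nu_in = bethe_spectral_density(A, 2, 3.0)
nu_out = bethe_spectral_density(A, 2, 20.0)
```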
From (13) and (14), we see that if the linearly stable solution of (15) is real, then the corresponding spectral density will be equal to 0. We want to show that there exists an open set U ⊂ R around 0 in which there exists a real, stable solution to the BP recursion. Let us call Δ ∈ R^{2m}, where m is the number of edges in G, the vector whose components are the Δ_{i→j}. We introduce the function F : (λ, Δ) ∈ R^{2m+1} → F(λ, Δ) ∈ R^{2m} defined by F(λ, Δ)_{i→j} = −λ + r² + d_i − 1 − r² Σ_{l∈∂i\j} Δ_{l→i} − 1/Δ_{i→j} , (16) so that equation (15) can be rewritten as F(λ, Δ) = 0 . (17) It is straightforward to check that when λ = 0, the assignment Δ_{i→j} = 1/r² is a real solution of (17). Furthermore, the Jacobian of F at this point reads J_F(0, {1/r²}) = ( −1 | r²(r²𝟙 − B) ) , (18) where the first column collects the derivatives with respect to λ, B is the 2m × 2m non-backtracking operator and 𝟙 is the 2m × 2m identity matrix. The square submatrix of the Jacobian containing the derivatives with respect to the messages Δ_{i→j} is therefore invertible whenever r > √ρ(B). From the continuous differentiability of F around (0, {1/r²}) and the implicit function theorem, there exists an open set V containing 0 such that for all λ ∈ V, there exists a solution Δ̃(λ) ∈ R^{2m} of (17), and the function Δ̃ is continuous in λ. To show that the spectral density is indeed 0 in an open set around λ = 0, we need to show that this solution is linearly stable. Introducing the function G_λ : Δ ∈ R^{2m} → G_λ(Δ) ∈ R^{2m} defined by G_λ(Δ)_{i→j} = ( −λ + r² + d_i − 1 − r² Σ_{l∈∂i\j} Δ_{l→i} )^{−1} , (19) it is enough to show that the Jacobian of G_λ at the point Δ̃(λ) has all its eigenvalues smaller than 1 in modulus, for λ close to 0. But since J_{G_λ}(Δ) is continuous in (λ, Δ) in the neighborhood of (0, Δ̃(0) = {1/r²}), and Δ̃(λ) is continuous in λ, it is enough to show that the spectral radius of J_{G_0}({1/r²}) is smaller than 1. We compute J_{G_0}({1/r²}) = (1/r²) B , (20) so that the spectral radius of J_{G_0}({1/r²}) is ρ(B)/r², which is (strictly) smaller than 1 as long as r > √ρ(B).
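Both ingredients of this argument, the fixed point Δ_{i→j} = 1/r² at λ = 0 and the Jacobian (20), can be checked numerically on any small graph. A sanity-check sketch (note that with the edge-indexing convention of (4), the Jacobian of the update map comes out as the transpose of B/r², which has the same spectral radius):

```python
import numpy as np

# Small test graph (a triangle with a pendant vertex); any graph works.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
deg = A.sum(axis=1)
edges = [(i, j) for i in range(4) for j in range(4) if A[i, j]]
idx = {e: a for a, e in enumerate(edges)}
r = 1.7                                     # any |r| > 1

def G0(vec):
    """The map G_0 of eq. (19) at lambda = 0."""
    out = np.empty(len(edges))
    for a, (i, j) in enumerate(edges):
        s = sum(vec[idx[(l, i)]] for l in range(4) if A[l, i] and l != j)
        out[a] = 1.0 / (r**2 + deg[i] - 1 - r**2 * s)
    return out

# 1) Delta_{i->j} = 1/r^2 is a fixed point of G_0: the cavity sum has
#    deg(i) - 1 terms, so the denominator collapses to r^2.
x0 = np.full(len(edges), 1.0 / r**2)
assert np.allclose(G0(x0), x0)

# 2) The Jacobian of G_0 at this point, computed by finite differences,
#    matches (1/r^2) B (eq. 20) up to transposition of the edge indexing.
h = 1e-6
J = np.array([(G0(x0 + h * np.eye(len(edges))[b]) - G0(x0)) / h
              for b in range(len(edges))]).T
B = np.array([[1.0 if j == k and i != l else 0.0
               for (k, l) in edges] for (i, j) in edges])
assert np.allclose(J, B.T / r**2, atol=1e-4)
```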
From the continuity of the eigenvalues of a matrix with respect to its entries, there exists an open set U ⊂ V containing 0 such that ∀λ ∈ U, the solution Δ̃ of the BP recursion (15) is real, so that the corresponding spectral density in U is equal to 0. This proves that the bulk of the spectrum of H reaches 0 at r = r_c = √ρ(B), further justifying our choice for the regularizer. 4 Numerical results 4.1 Synthetic networks We illustrate the efficiency of the algorithm for graphs generated by the stochastic block model. Fig. 2 shows the performance of standard spectral clustering methods, as well as that of the belief propagation (BP) algorithm of [4], believed to be asymptotically optimal on large tree-like graphs. The performance is measured in terms of the overlap with the true labeling, defined as ( (1/N) Σ_u δ_{g_u, g̃_u} − 1/q ) / ( 1 − 1/q ) , (21) where g_u is the true group label of node u, g̃_u is the label given by the algorithm, and we maximize over all q! possible permutations of the groups. The Bethe Hessian systematically outperforms B and does almost as well as BP, which is a more complicated algorithm that we have run here assuming the knowledge of "oracle parameters": the number of communities, their sizes, and the matrix p_{ab} [5, 4]. The Bethe Hessian, on the other hand, is non-parametric and infers the number of communities in the graph by counting the number of negative eigenvalues. 4.2 Real networks We finally turn towards actual real graphs to illustrate the performance of our approach, and to show that even if real networks are not generated by the stochastic block model, the Bethe Hessian operator remains a useful tool. In Table 1 we give the overlap and the number of groups to be identified. We limited our experiments to this list of networks because they have known, "ground truth" clusters. For each case we observed a large correlation to the ground truth, and at least equal (and sometimes better) performance with respect to the non-backtracking operator.
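The overlap (21) requires a maximization over all q! relabelings of the predicted groups; a direct brute-force implementation (mine, fine for small q):

```python
import numpy as np
from itertools import permutations

def overlap(g_true, g_pred, q):
    """Eq. (21): accuracy rescaled so that random guessing scores 0 and
    perfect recovery scores 1, maximized over label permutations."""
    g_true, g_pred = np.asarray(g_true), np.asarray(g_pred)
    best = max(np.mean(np.array(perm)[g_pred] == g_true)
               for perm in permutations(range(q)))
    return (best - 1.0 / q) / (1.0 - 1.0 / q)

print(overlap([0, 0, 1, 1], [1, 1, 0, 0], q=2))  # perfect up to relabeling: 1.0
print(overlap([0, 0, 1, 1], [0, 1, 0, 1], q=2))  # chance level: 0.0
```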
The overlap was computed assuming knowledge of the number of ground-truth clusters. The number of clusters is correctly given by the number of negative eigenvalues of the Bethe Hessian in all the presented cases except for the political blogs network (10 predicted clusters) and the football network (10 predicted clusters). These differences either question the statistical significance of some of the human-decided labelling, or suggest the existence of additional relevant clusters. It is also interesting to note that our approach works not only in the assortative case but also in the disassortative one, for instance for the word adjacency network. A Matlab implementation to reproduce the results of the Bethe Hessian for both real and synthetic networks is provided as supplementary material.

[Figure 2: Performance of spectral clustering applied to graphs of size n = 10^5 generated from the stochastic block model. Each point is averaged over 20 such graphs. Left: assortative case with q = 2 clusters (theoretical transition at 3.46); middle: disassortative case with q = 2 (theoretical transition at −3.46); right: assortative case with q = 3 clusters (theoretical transition at 5.20). For q = 2, we clustered according to the signs of the components of the eigenvector corresponding to the second most negative eigenvalue of the Bethe Hessian operator. For q = 3, we used k-means on the 3 "negative" eigenvectors. While both the standard adjacency (A) and symmetrically normalized Laplacian (D^{−1/2}(D − A)D^{−1/2}) approaches fail to identify clusters in a large relevant region, both the non-backtracking (B) and the Bethe Hessian (BH) approaches identify clusters almost as well as the more complicated belief propagation (BP) with oracle parameters. Note, however, that the Bethe Hessian systematically outperforms the non-backtracking operator, at a smaller computational cost. Additionally, clustering with the adjacency matrix and the normalized Laplacian was run on the largest connected component, while the Bethe Hessian doesn't require any kind of pre-processing of the graph. While our theory explains why clustering with the Bethe Hessian gives a positive overlap whenever clustering with B does, we currently don't have an explanation as to why the Bethe Hessian overlap is actually larger.]

[Table 1: Overlap for some commonly used benchmarks for community detection, computed using the signs of the second eigenvector for the networks with two communities, and using k-means for those with three and more communities, compared to the man-made group assignment. The non-backtracking operator detects communities in all these networks, with an overlap comparable to the performance of other spectral methods. The Bethe Hessian systematically either equals or outperforms the results obtained by the non-backtracking operator.

PART | Non-backtracking [9] | Bethe Hessian
Polbooks (q = 3) [1] | 0.742857 | 0.757143
Polblogs (q = 2) [10] | 0.864157 | 0.865794
Karate (q = 2) [24] | 1 | 1
Football (q = 12) [6] | 0.924111 | 0.924111
Dolphins (q = 2) [16] | 0.741935 | 0.806452
Adjnoun (q = 2) [8] | 0.625000 | 0.660714]

5 Conclusion and perspectives We have presented here a new approach to spectral clustering using the Bethe Hessian and given evidence that this approach combines the advantages of standard sparse symmetric real matrices with the performances of the more involved non-backtracking operator, or the use of the belief propagation algorithm with oracle parameters.
Advantages over other spectral methods are that the number of negative eigenvalues provides an estimate of the number of clusters, that there is a well-defined way to set the parameter r, making the algorithm free of tuning parameters, and that it is guaranteed to detect the communities generated from the stochastic block model down to the theoretical limit. This answers the quest for a tractable non-parametric approach that performs optimally in the stochastic block model. Given the large impact and the wide use of spectral clustering methods in many fields of modern science, we thus expect that our method will have a significant impact on data analysis. References [1] L. A. Adamic and N. Glance. The political blogosphere and the 2004 US election: divided they blog. In Proceedings of the 3rd International Workshop on Link Discovery, page 36. ACM, 2005. [2] P. J. Bickel and A. Chen. A nonparametric view of network models and Newman–Girvan and other modularities. Proceedings of the National Academy of Sciences, 106(50):21068, 2009. [3] C. Bordenave and M. Lelarge. Resolvent of large random graphs. Random Structures and Algorithms, 37(3):332–352, 2010. [4] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Phys. Rev. E, 84(6):066106, 2011. [5] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Inference and phase transitions in the detection of modules in sparse networks. Phys. Rev. Lett., 107(6):065701, 2011. [6] M. Girvan and M. E. J. Newman. Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99(12):7821–7826, 2002. [7] P. W. Holland, K. Blackmond Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109, 1983. [8] V. Krebs. The network can be found on http://www.orgnet.com/. [9] F. Krzakala, C. Moore, E. Mossel, J. Neeman, A. Sly, L. Zdeborová, and P. Zhang.
Spectral redemption in clustering sparse networks. Proceedings of the National Academy of Sciences, 110(52):20935–20940, 2013. [10] D. Lusseau, K. Schneider, O. J. Boisseau, P. Haase, E. Slooten, and S. M Dawson. The bottlenose dolphin community of doubtful sound features a large proportion of long-lasting associations. Behavioral Ecology and Sociobiology, 54(4):396–405, 2003. [11] Ulrike Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395, 2007. [12] Laurent Massoulie. Community detection thresholds and the weak ramanujan property. arXiv preprint arXiv:1311.3085, 2013. [13] M. Mezard and A. Montanari. Information, Physics, and Computation. Oxford University Press, 2009. [14] Joris M Mooij, Hilbert J Kappen, et al. Validity estimates for loopy belief propagation on binary real-world networks. In NIPS, 2004. [15] Elchanan Mossel, Joe Neeman, and Allan Sly. A proof of the block model threshold conjecture. arXiv preprint arXiv:1311.4115, 2013. [16] Mark EJ Newman. Finding community structure in networks using the eigenvectors of matrices. Phys. Rev. E, 74(3):036104, 2006. [17] F. Ricci-Tersenghi. The bethe approximation for solving the inverse ising problem: a comparison with other inference methods. J. Stat. Mech.: Th. and Exp., page P08015, 2012. [18] Tim Rogers, Isaac P´erez Castillo, Reimer K¨uhn, and Koujin Takeda. Cavity approach to the spectral density of sparse symmetric random matrices. Phys. Rev. E, 78(3):031116, 2008. [19] Axel Ruhe. Algorithms for the nonlinear eigenvalue problem. SIAM Journal on Numerical Analysis, 10(4):674–689, 1973. [20] Alaa Saade, Florent Krzakala, and Lenka Zdeborov´a. Spectral density of the non-backtracking operator on random graphs. EPL, 107(5):50005, 2014. [21] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1, 2008. [22] Yuchung J Wang and George Y Wong. Stochastic blockmodels for directed graphs. 
Journal of the American Statistical Association, 82(397):8–19, 1987. [23] Yusuke Watanabe and Kenji Fukumizu. Graph zeta function in the bethe free energy and loopy belief propagation. In NIPS, pages 2017–2025, 2009. [24] W Zachary. An information flow model for conflict and fission in small groups1. Journal of anthropological research, 33(4):452–473, 1977. 9
|
2014
|
147
|
5,233
|
Optimal Regret Minimization in Posted-Price Auctions with Strategic Buyers

Mehryar Mohri, Courant Institute and Google Research, 251 Mercer Street, New York, NY 10012, mohri@cims.nyu.edu
Andrés Muñoz Medina, Courant Institute, 251 Mercer Street, New York, NY 10012, munoz@cims.nyu.edu

Abstract

We study revenue optimization learning algorithms for posted-price auctions with strategic buyers. We analyze a very broad family of monotone regret minimization algorithms for this problem, which includes the previously best known algorithm, and show that no algorithm in that family admits a strategic regret more favorable than Ω(√T). We then introduce a new algorithm that achieves a strategic regret differing from the lower bound only by a factor in O(log T), an exponential improvement upon the previous best algorithm. Our new algorithm admits a natural analysis and simpler proofs, and the ideas behind its design are general. We also report the results of empirical evaluations comparing our algorithm with the previous state of the art and show a consistent exponential improvement in several different scenarios.

1 Introduction

Auctions have long been an active area of research in Economics and Game Theory [Vickrey, 2012, Milgrom and Weber, 1982, Ostrovsky and Schwarz, 2011]. In the past decade, however, the advent of online advertisement has prompted a more algorithmic study of auctions, including the design of learning algorithms for revenue maximization for generalized second-price auctions or second-price auctions with reserve [Cesa-Bianchi et al., 2013, Mohri and Muñoz Medina, 2014, He et al., 2013]. These studies have been largely motivated by the widespread use of AdExchanges and the vast amount of historical data thereby collected – AdExchanges are advertisement selling platforms using second-price auctions with reserve price to allocate advertisement space.
Thus far, the learning algorithms proposed for revenue maximization in these auctions critically rely on the assumption that the bids, that is, the outcomes of auctions, are drawn i.i.d. according to some unknown distribution. However, this assumption may not hold in practice. In particular, with the knowledge that a revenue optimization algorithm is being used, an advertiser could seek to mislead the publisher by under-bidding. In fact, consistent empirical evidence of strategic behavior by advertisers has been found by Edelman and Ostrovsky [2007]. This motivates the analysis presented in this paper of the interactions between sellers and strategic buyers, that is, buyers that may act non-truthfully with the goal of maximizing their surplus. The scenario we consider is that of posted-price auctions, which, albeit simpler than other mechanisms, in fact matches a common situation in AdExchanges where many auctions admit a single bidder. In this setting, second-price auctions with reserve are equivalent to posted-price auctions: a seller sets a reserve price for a good and the buyer decides whether or not to accept it (that is, to bid higher than the reserve price). In order to capture the buyer's strategic behavior, we will analyze an online scenario: at each time t, a price p_t is offered by the seller and the buyer must decide to either accept it or leave it. This scenario can be modeled as a two-player repeated non-zero-sum game with incomplete information, where the seller's objective is to maximize his revenue, while the advertiser seeks to maximize her surplus, as described in more detail in Section 2. The literature on non-zero-sum games is very rich [Nachbar, 1997, 2001, Morris, 1994], but much of the work in that area has focused on characterizing different types of equilibria, which is not directly relevant to the algorithmic questions arising here.
Furthermore, the problem we consider admits a particular structure that can be exploited to design efficient revenue optimization algorithms. From the seller's perspective, this game can also be viewed as a bandit problem [Kuleshov and Precup, 2010, Robbins, 1985] since only the revenue (or reward) for the prices offered is accessible to the seller. Kleinberg and Leighton [2003] precisely studied this continuous bandit setting under the assumption of an oblivious buyer, that is, one that does not exploit the seller's behavior (more precisely, the authors assume that at each round the seller interacts with a different buyer). The authors presented a tight regret bound of Θ(log log T) for the scenario of a buyer holding a fixed valuation and a regret bound of O(T^{2/3}) when facing an adversarial buyer by using an elegant reduction to a discrete bandit problem. However, as argued by Amin et al. [2013], when dealing with a strategic buyer, the usual definition of regret is no longer meaningful. Indeed, consider the following example: let the valuation of the buyer be given by v ∈ [0, 1] and assume that an algorithm with sublinear regret such as Exp3 [Auer et al., 2002b] or UCB [Auer et al., 2002a] is used for T rounds by the seller. A possible strategy for the buyer, knowing the seller's algorithm, would be to accept prices only if they are smaller than some small value ϵ, certain that the seller would eventually learn to offer only prices less than ϵ. If ϵ ≪ v, the buyer would considerably boost her surplus while, in theory, the seller would not have incurred a large regret since, in hindsight, the best fixed strategy would have been to offer price ϵ for all rounds. This, however, is clearly not optimal for the seller. The stronger notion of policy regret introduced by Arora et al. [2012] has been shown to be the appropriate one for the analysis of bandit problems with adaptive adversaries.
However, for the example just described, a sublinear policy regret can be similarly achieved. Thus, this notion of regret is also not the pertinent one for the study of our scenario. We will adopt instead the definition of strategic-regret, which was introduced by Amin et al. [2013] precisely for the study of this problem. This notion of regret also matches the concept of learning loss introduced by [Agrawal, 1995] when facing an oblivious adversary. Using this definition, Amin et al. [2013] presented both upper and lower bounds for the regret of a seller facing a strategic buyer and showed that the buyer's surplus must be discounted over time in order to be able to achieve sublinear regret (see Section 2). However, the gap between the upper and lower bounds they presented is in O(√T). In the following, we analyze a very broad family of monotone regret minimization algorithms for this problem (Section 3), which includes the algorithm of Amin et al. [2013], and show that no algorithm in that family admits a strategic regret more favorable than Ω(√T). Next, we introduce a nearly-optimal algorithm that achieves a strategic regret differing from the lower bound at most by a factor in O(log T) (Section 4). This represents an exponential improvement upon the existing best algorithm for this setting. Our new algorithm admits a natural analysis and simpler proofs. A key idea behind its design is a method deterring the buyer from lying, that is, rejecting prices below her valuation.

2 Setup

We consider the following game played by a buyer and a seller. A good, such as an advertisement space, is repeatedly offered for sale by the seller to the buyer over T rounds. The buyer holds a private valuation v ∈ [0, 1] for that good. At each round t = 1, . . . , T, a price p_t is offered by the seller and a decision a_t ∈ {0, 1} is made by the buyer; a_t takes value 1 when the buyer accepts to buy at that price, 0 otherwise. We will say that a buyer lies whenever a_t = 0 while p_t < v.
At the beginning of the game, the algorithm A used by the seller to set prices is announced to the buyer. Thus, the buyer plays strategically against this algorithm. The knowledge of A is a standard assumption in mechanism design and also matches the practice in AdExchanges. For any γ ∈ (0, 1), define the discounted surplus of the buyer as follows:

Sur(A, v) = Σ_{t=1}^T γ^{t−1} a_t (v − p_t).  (1)

The value of the discount factor γ indicates the strength of the preference of the buyer for current surpluses versus future ones. The performance of a seller's algorithm is measured by the notion of strategic-regret [Amin et al., 2013] defined as follows:

Reg(A, v) = Tv − Σ_{t=1}^T a_t p_t.  (2)

The buyer's objective is to maximize her discounted surplus, while the seller seeks to minimize his regret. Note that, in view of the discounting factor γ, the buyer is not fully adversarial. The problem consists of designing algorithms achieving sublinear strategic regret (that is, a regret in o(T)). The motivation behind the definition of strategic-regret is straightforward: a seller with access to the buyer's valuation can set a fixed price for the good ϵ-close to this value. The buyer, having no control over the prices offered, has no option but to accept this price in order to optimize her utility. The revenue per round of the seller is therefore v − ϵ. Since there is no scenario where higher revenue can be achieved, this is a natural setting against which to compare the performance of our algorithm. To gain more intuition about the problem, let us examine some of the complications arising when dealing with a strategic buyer. Suppose the seller attempts to learn the buyer's valuation v by performing a binary search. This would be a natural algorithm when facing a truthful buyer. However, in view of the buyer's knowledge of the algorithm, for γ ≫ 0, it is in her best interest to lie in the initial rounds, thereby quickly, in fact exponentially, decreasing the price offered by the seller.
The seller would then incur an Ω(T) regret. A binary search approach is therefore "too aggressive". Indeed, an untruthful buyer can manipulate the seller into offering prices less than v/2 by lying about her value even just once! This discussion suggests following a more conservative approach. In the next section, we discuss a natural family of conservative algorithms for this problem.

3 Monotone algorithms

The following conservative pricing strategy was introduced by Amin et al. [2013]. Let p_1 = 1 and β < 1. If price p_t is rejected at round t, the lower price p_{t+1} = βp_t is offered at the next round. If at any time price p_t is accepted, then this price is offered for all the remaining rounds. We will denote this algorithm by monotone. The motivation behind its design is clear: for a suitable choice of β, the seller can slowly decrease the prices offered, thereby pressing the buyer to reject many prices (which is not convenient for her) before obtaining a favorable price. The authors present an O(T_γ √T) regret bound for this algorithm, with T_γ = 1/(1 − γ). A more careful analysis shows that this bound can be further tightened to O(√(T_γ T) + √T) when the discount factor γ is known to the seller. Despite its sublinear regret, the monotone algorithm remains sub-optimal for certain choices of γ. Indeed, consider a scenario with γ ≪ 1. For this setting, the buyer would no longer have an incentive to lie; thus, an algorithm such as binary search would achieve logarithmic regret, while the regret achieved by the monotone algorithm is only guaranteed to be in O(√T). One may argue that the monotone algorithm is too specific since it admits a single parameter β and that perhaps a more complex algorithm with the same monotonic idea could achieve a more favorable regret. Let us therefore analyze a generic monotone algorithm A_m defined by Algorithm 1.

Definition 1.
For any buyer's valuation v ∈ [0, 1], define the acceptance time κ* = κ*(v) as the first time a price offered by the seller using algorithm A_m is accepted.

Proposition 1. For any decreasing sequence of prices (p_t)_{t=1}^T, there exists a truthful buyer with valuation v_0 such that algorithm A_m suffers regret of at least Reg(A_m, v_0) ≥ (1/4)√(T − √T).

Proof. By definition of the regret, we have Reg(A_m, v) = vκ* + (T − κ*)(v − p_{κ*}). We can consider two cases: κ*(v_0) > √T for some v_0 ∈ [1/2, 1], and κ*(v) ≤ √T for every v ∈ [1/2, 1]. In the former case, we have Reg(A_m, v_0) ≥ v_0 √T ≥ (1/2)√T, which implies the statement of the proposition. Thus, we can assume the latter condition.

Algorithm 1 Family of monotone algorithms.
  Let p_1 = 1 and p_t ≤ p_{t−1} for t = 2, . . . , T.
  t ← 1; p ← p_t; offer price p
  while (buyer rejects p) and (t < T) do
    t ← t + 1; p ← p_t; offer price p
  end while
  while (t < T) do
    t ← t + 1; offer price p
  end while

Algorithm 2 Definition of A_r.
  n ← root of T(T)
  while fewer than T prices have been offered do
    offer price p_n
    if accepted then
      n ← r(n)
    else
      offer price p_n for r rounds
      n ← l(n)
    end if
  end while

Let v be uniformly distributed over [1/2, 1]. In view of Lemma 4 (see Appendix 8.1), we have

E[vκ*] + E[(T − κ*)(v − p_{κ*})] ≥ (1/2)E[κ*] + (T − √T) E[v − p_{κ*}] ≥ (1/2)E[κ*] + (T − √T)/(32 E[κ*]).

The right-hand side is minimized for E[κ*] = √(T − √T)/4. Plugging in this value yields E[Reg(A_m, v)] ≥ √(T − √T)/4, which implies the existence of v_0 with Reg(A_m, v_0) ≥ √(T − √T)/4.

We have thus shown that any monotone algorithm A_m suffers a regret of at least Ω(√T), even when facing a truthful buyer. A tighter lower bound can be given under a mild condition on the prices offered.

Definition 2. A sequence (p_t)_{t=1}^T is said to be convex if it verifies p_t − p_{t+1} ≥ p_{t+1} − p_{t+2} for t = 1, . . . , T − 2. An instance of a convex sequence is given by the prices offered by the monotone algorithm.
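To make the monotone strategy and Definition 2 concrete, here is a small self-contained sketch (our own illustration, not the authors' code): it simulates monotone against a truthful buyer, evaluates the strategic regret of definition (2), and checks that the resulting price sequence is convex. The values v = 0.5, β = 0.9 and the horizon are arbitrary example choices.

```python
# Illustration (not the authors' code) of the monotone pricing strategy and of
# Definition 2. The buyer here is truthful: she accepts any price p <= v.

def run_monotone(v, beta, T):
    """Offer p1 = 1, multiply by beta after each rejection, freeze once accepted."""
    prices, accepts, p, accepted = [], [], 1.0, False
    for _ in range(T):
        if p <= v:
            accepted = True
        prices.append(p)
        accepts.append(1 if accepted else 0)
        if not accepted:
            p *= beta
    return prices, accepts

def strategic_regret(prices, accepts, v):
    """Reg(A, v) = T*v - sum_t a_t * p_t, as in definition (2)."""
    return len(prices) * v - sum(a * p for a, p in zip(accepts, prices))

def is_convex(prices):
    """p_t - p_{t+1} >= p_{t+1} - p_{t+2} for all t, as in Definition 2."""
    return all(prices[t] - prices[t + 1] >= prices[t + 1] - prices[t + 2] - 1e-12
               for t in range(len(prices) - 2))

prices, accepts = run_monotone(v=0.5, beta=0.9, T=100)
print(round(strategic_regret(prices, accepts, 0.5), 3))
print(is_convex(prices))  # True: geometric decrements shrink, then stay at 0
```

The decrements β^{t−1}(1 − β) decrease with t and drop to 0 after acceptance, which is why the sequence satisfies Definition 2.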
A seller offering prices forming a decreasing convex sequence seeks to control the number of lies of the buyer by slowly reducing prices. The following proposition gives a lower bound on the regret of any algorithm in this family.

Proposition 2. Let (p_t)_{t=1}^T be a decreasing convex sequence of prices. There exists a valuation v_0 for the buyer such that the regret of the monotone algorithm defined by these prices is Ω(√(T C_γ) + √T), where C_γ = γ/(2(1 − γ)).

The full proof of this proposition is given in Appendix 8.1. The proposition shows that when the discount factor γ is known, the monotone algorithm is in fact asymptotically optimal in its class. The results just presented suggest that the dependency on T cannot be improved by any monotone algorithm. In some sense, this family of algorithms is "too conservative". Thus, to achieve a more favorable regret guarantee, an entirely different algorithmic idea must be introduced. In the next section, we describe a new algorithm that achieves a substantially more advantageous strategic regret by combining the fast convergence properties of a binary search-type algorithm (in a truthful setting) with a method penalizing untruthful behaviors of the buyer.

4 A nearly optimal algorithm

Let A be an algorithm for revenue optimization used against a truthful buyer. Denote by T(T) the tree associated to A after T rounds. That is, T(T) is a full tree of height T with nodes n ∈ T(T) labeled with the prices p_n offered by A. The right and left children of n are denoted by r(n) and l(n) respectively. The price offered when p_n is accepted by the buyer is the label of r(n), while the price offered by A if p_n is rejected is the label of l(n). Finally, we will denote the left and right subtrees rooted at node n by L(n) and R(n) respectively. Figure 1 depicts the tree generated by an algorithm proposed by Kleinberg and Leighton [2003], which we will describe later.
Figure 1: (a) Tree T(3) associated to the algorithm proposed in [Kleinberg and Leighton, 2003], with node prices 1/2; 1/4, 3/4; 1/16, 5/16, 9/16, 13/16. (b) Modified tree T′(3) with r = 2, with node prices 1/2; 1/4, 3/4; 13/16.

Since the buyer holds a fixed valuation, we will consider algorithms that increase prices only after a price is accepted and decrease it only after a rejection. This is formalized in the following definition.

Definition 3. An algorithm A is said to be consistent if max_{n′∈L(n)} p_{n′} ≤ p_n ≤ min_{n′∈R(n)} p_{n′} for any node n ∈ T(T).

For any consistent algorithm A, we define a modified algorithm A_r, parametrized by an integer r ≥ 1, designed to face strategic buyers. Algorithm A_r offers the same prices as A, but it is defined with the following modification: when a price is rejected by the buyer, the seller offers the same price for r rounds. The pseudocode of A_r is given in Algorithm 2. The motivation behind the modified algorithm is given by the following simple observation: a strategic buyer will lie only if she is certain that rejecting a price will boost her surplus in the future. By forcing the buyer to reject a price for several rounds, the seller ensures that the future discounted surplus will be negligible, thereby coercing the buyer to be truthful. We proceed to formally analyze algorithm A_r. In particular, we will quantify the effect of the parameter r on the choice of the buyer's strategy. To do so, a measure of the spread of the prices offered by A_r is needed.

Definition 4. For any node n ∈ T(T), define the right increment of n as δ^r_n := p_{r(n)} − p_n. Similarly, define its left increment to be δ^l_n := max_{n′∈L(n)} p_n − p_{n′}.

The prices offered by A_r define a path in T(T). For each node in this path, we can define the time t(n) to be the number of rounds needed for this node to be reached by A_r. Note that, since r may be greater than 1, the path chosen by A_r might not necessarily reach the leaves of T(T).
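As a concrete, purely illustrative instantiation of A_r, the sketch below wraps a midpoint-bisection price rule (a consistent algorithm: an acceptance moves the feasible interval up, a rejection moves it down) with the r-round reoffer penalty of Algorithm 2. The valuation 0.6, the horizon and r are arbitrary example values.

```python
# Sketch (our own, not the paper's code) of A_r from Algorithm 2: follow a
# consistent pricing rule, but after any rejection reoffer the same price for
# r further rounds before descending into the left subtree.

def run_Ar(next_price, buyer, T, r):
    lo, hi, t, offered = 0.0, 1.0, 0, []
    while t < T:
        p = next_price(lo, hi)
        offered.append(p)
        t += 1
        if buyer(p):
            lo = p                  # accepted: continue in the right subtree
        else:
            for _ in range(r):      # rejected: r penalty rounds at the same price
                if t >= T:
                    break
                offered.append(p)
                t += 1
            hi = p                  # then continue in the left subtree
    return offered

# Midpoint bisection as the underlying consistent rule; truthful buyer, v = 0.6.
offered = run_Ar(lambda lo, hi: (lo + hi) / 2, lambda p: p <= 0.6, T=12, r=2)
print([round(p, 3) for p in offered])
```

With r = 2, each rejected price is offered three times in total, which is exactly what makes lying costly for a discounting buyer.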
Finally, let S : n ↦ S(n) be the function representing the surplus obtained by the buyer when playing an optimal strategy against A_r after node n is reached.

Lemma 1. The function S satisfies the following recursive relation:

S(n) = max(γ^{t(n)−1}(v − p_n) + S(r(n)), S(l(n))).  (3)

Proof. Define a weighted tree T′(T) ⊂ T(T) of nodes reachable by algorithm A_r. We assign weights to the edges in the following way: if an edge on T′(T) is of the form (n, r(n)), its weight is set to be γ^{t(n)−1}(v − p_n); otherwise, it is set to 0. It is easy to see that the function S evaluates the weight of the longest path from node n to the leaves of T′(T). It thus follows from elementary graph algorithms that equation (3) holds.

The previous lemma immediately gives us necessary conditions for a buyer to reject a price.

Proposition 3. For any reachable node n, if price p_n is rejected by the buyer, then the following inequality holds:

v − p_n < γ^r / ((1 − γ)(1 − γ^r)) · (δ^l_n + γ δ^r_n).

Proof. A direct implication of Lemma 1 is that price p_n will be rejected by the buyer if and only if

γ^{t(n)−1}(v − p_n) + S(r(n)) < S(l(n)).  (4)

However, by definition, the buyer's surplus obtained by following any path in R(n) is bounded above by S(r(n)). In particular, this is true for the path which rejects p_{r(n)} and accepts every price afterwards. The surplus of this path is given by Σ_{t=t(n)+r+1}^T γ^{t−1}(v − p̂_t), where (p̂_t)_{t=t(n)+r+1}^T are the prices the seller would offer if price p_{r(n)} were rejected. Furthermore, since algorithm A_r is consistent, we must have p̂_t ≤ p_{r(n)} = p_n + δ^r_n. Therefore, S(r(n)) can be bounded as follows:

S(r(n)) ≥ Σ_{t=t(n)+r+1}^T γ^{t−1}(v − p_n − δ^r_n) = ((γ^{t(n)+r} − γ^T)/(1 − γ)) (v − p_n − δ^r_n).  (5)

We proceed to upper bound S(l(n)). Since p_n − p_{n′} ≤ δ^l_n for all n′ ∈ L(n), we have v − p_{n′} ≤ v − p_n + δ^l_n and

S(l(n)) ≤ Σ_{t=t(n)+r}^T γ^{t−1}(v − p_n + δ^l_n) = ((γ^{t(n)+r−1} − γ^T)/(1 − γ)) (v − p_n + δ^l_n).
(6)

Combining inequalities (4), (5) and (6), we conclude that

γ^{t(n)−1}(v − p_n) + ((γ^{t(n)+r} − γ^T)/(1 − γ))(v − p_n − δ^r_n) ≤ ((γ^{t(n)+r−1} − γ^T)/(1 − γ))(v − p_n + δ^l_n)
⇒ (v − p_n)(1 + (γ^{r+1} − γ^r)/(1 − γ)) ≤ (γ^r δ^l_n + γ^{r+1} δ^r_n − γ^{T−t(n)+1}(δ^r_n + δ^l_n))/(1 − γ)
⇒ (v − p_n)(1 − γ^r) ≤ γ^r(δ^l_n + γ δ^r_n)/(1 − γ).

Rearranging the terms in the above inequality yields the desired result.

Let us consider the following instantiation of algorithm A introduced in [Kleinberg and Leighton, 2003]. The algorithm keeps track of a feasible interval [a, b], initialized to [0, 1], and an increment parameter ϵ, initialized to 1/2. The algorithm works in phases. Within each phase, it offers prices a + ϵ, a + 2ϵ, . . . until a price is rejected. If price a + kϵ is rejected, then a new phase starts with the feasible interval set to [a + (k − 1)ϵ, a + kϵ] and the increment parameter set to ϵ². This process continues until b − a < 1/T, at which point the last phase starts and price a is offered for the remaining rounds. It is not hard to see that the number of phases needed by the algorithm is less than ⌈log₂ log₂ T⌉ + 1. A more surprising fact is that this algorithm has been shown to achieve regret O(log log T) when the seller faces a truthful buyer. We will show that the modification A_r of this algorithm admits a particularly favorable regret bound. We will call this algorithm PFS_r (penalized fast search algorithm).

Proposition 4. For any value of v ∈ [0, 1] and any γ ∈ (0, 1), the regret of algorithm PFS_r admits the following upper bound:

Reg(PFS_r, v) ≤ (vr + 1)(⌈log₂ log₂ T⌉ + 1) + ((1 + γ) γ^r T)/(2(1 − γ)(1 − γ^r)).  (7)

Note that for r = 1 and γ → 0 the upper bound coincides with that of [Kleinberg and Leighton, 2003].

Proof. Algorithm PFS_r can accumulate regret in two ways: the price offered p_n is rejected, in which case the regret is v, or the price is accepted and its regret is v − p_n. Let K = ⌈log₂ log₂ T⌉ + 1 be the number of phases run by algorithm PFS_r.
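The phase structure just described can be sketched as follows (our own illustration, with a truthful buyer and arbitrary example values for v, T and r):

```python
# Sketch (not the authors' code) of the penalized fast search algorithm PFS_r
# run against a truthful buyer: within a phase, prices rise by eps until one is
# rejected; the rejection costs r extra rounds and squares the increment.
import math

def pfs_r_phases(v, T, r):
    a, b, eps = 0.0, 1.0, 0.5
    t, phases = 0, 0
    while t < T and b - a >= 1.0 / T:
        phases += 1
        k = 1
        while t < T:
            p = a + k * eps
            t += 1                          # one round to offer price p
            if p <= v:                      # truthful buyer accepts
                k += 1
            else:                           # rejected: pay r penalty rounds
                t += r
                a, b, eps = a + (k - 1) * eps, a + k * eps, eps * eps
                break
    # the remaining rounds would offer price a, which is within 1/T of v
    return phases

T = 10**6
print(pfs_r_phases(v=0.37, T=T, r=3),
      math.ceil(math.log2(math.log2(T))) + 1)   # phases stay within the bound
```

Squaring ϵ each phase is what yields the doubly-logarithmic phase count ⌈log₂ log₂ T⌉ + 1.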
Since at most K different prices are rejected by the buyer (one rejection per phase) and each price must be rejected for r rounds, the cumulative regret of all rejections is upper bounded by vKr. The second type of regret can also be bounded straightforwardly. For any phase i, let ϵ_i and [a_i, b_i] denote the corresponding search parameter and feasible interval respectively. If v ∈ [a_i, b_i], the regret accrued in the case where the buyer accepts a price in this interval is bounded by b_i − a_i = √ϵ_i. If, on the other hand, v ≥ b_i, then it readily follows that v − p_n < v − b_i + √ϵ_i for all prices p_n offered in phase i. Therefore, the regret obtained in acceptance rounds is bounded by

Σ_{i=1}^K N_i ((v − b_i) 1_{v>b_i} + √ϵ_i) ≤ Σ_{i=1}^K (v − b_i) 1_{v>b_i} N_i + K,

where N_i ≤ 1/√ϵ_i denotes the number of prices offered during the i-th phase. Finally, notice that, in view of the algorithm's definition, every b_i corresponds to a rejected price. Thus, by Proposition 3, there exist nodes n_i (not necessarily distinct) such that p_{n_i} = b_i and

v − b_i = v − p_{n_i} ≤ γ^r / ((1 − γ)(1 − γ^r)) · (δ^l_{n_i} + γ δ^r_{n_i}).

It is immediate that δ^r_n ≤ 1/2 and δ^l_n ≤ 1/2 for any node n; thus, we can write

Σ_{i=1}^K (v − b_i) 1_{v>b_i} N_i ≤ (γ^r(1 + γ))/(2(1 − γ)(1 − γ^r)) Σ_{i=1}^K N_i ≤ (γ^r(1 + γ))/(2(1 − γ)(1 − γ^r)) · T.

The last inequality holds since at most T prices are offered by our algorithm. Combining the bounds for both regret types yields the result.

When an upper bound on the discount factor γ is known to the seller, he can leverage this information and optimize upper bound (7) with respect to the parameter r.

Theorem 1. Let 1/2 < γ < γ_0 < 1 and r* = ⌈argmin_{r≥1} (r + γ_0^r T / ((1 − γ_0)(1 − γ_0^r)))⌉. For any v ∈ [0, 1], if T > 4, the regret of PFS_{r*} satisfies

Reg(PFS_{r*}, v) ≤ (2vγ_0 T_{γ_0} log cT + 1 + v)(log₂ log₂ T + 1) + 4T_{γ_0},

where c = 4 log 2. The proof of this theorem is fairly technical and is deferred to the Appendix. The theorem helps us define conditions under which logarithmic regret can be achieved.
Indeed, if γ_0 = e^{−1/log T} = O(1 − 1/log T), using the inequality e^{−x} ≤ 1 − x + x²/2, valid for all x > 0, we obtain

1/(1 − γ_0) ≤ log² T / (2 log T − 1) ≤ log T.

It then follows from Theorem 1 that

Reg(PFS_{r*}, v) ≤ (2v log T log cT + 1 + v)(log₂ log₂ T + 1) + 4 log T.

Let us compare the regret bound given by Theorem 1 with the one given by Amin et al. [2013]. The above discussion shows that for certain values of γ, an exponentially better regret can be achieved by our algorithm. It can be argued that the knowledge of an upper bound on γ is required, whereas this is not needed for the monotone algorithm. However, if γ > 1 − 1/√T, the regret bound on monotone is super-linear, and therefore uninformative. Thus, in order to properly compare both algorithms, we may assume that γ < 1 − 1/√T, in which case, by Theorem 1, the regret of our algorithm is O(√T log T) whereas only linear regret can be guaranteed by the monotone algorithm. Even under the more favorable bound of O(√(T_γ T) + √T), for any α < 1 and γ < 1 − 1/T^α, the monotone algorithm will achieve regret O(T^{(α+1)/2}) while a strictly better regret O(T^α log T log log T) is attained by ours.

5 Lower bound

The following lower bounds have been derived in previous work.

Theorem 2 ([Amin et al., 2013]). Let γ > 0 be fixed. For any algorithm A, there exists a valuation v for the buyer such that Reg(A, v) ≥ (1/12) T_γ.

This theorem is in fact given for the stochastic setting where the buyer's valuation is a random variable taken from some fixed distribution D. However, the proof of the theorem selects D to be a point mass, therefore reducing the scenario to a fixed-price setting.

Theorem 3 ([Kleinberg and Leighton, 2003]). Given any algorithm A to be played against a truthful buyer, there exists a value v ∈ [0, 1] such that Reg(A, v) ≥ C log log T for some universal constant C.
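To illustrate the optimization defining r* in Theorem 1, here is a quick numeric sketch (our own; the γ_0 and T values are arbitrary) showing how the optimal penalty length grows with the discount factor:

```python
# Numeric sketch (our own) of r* from Theorem 1: balance the per-rejection cost
# r against the term gamma0^r * T / ((1 - gamma0)(1 - gamma0^r)) that bounds
# the regret caused by untruthful behavior.

def r_star(gamma0, T, r_max=500):
    def objective(r):
        return r + gamma0 ** r * T / ((1 - gamma0) * (1 - gamma0 ** r))
    return min(range(1, r_max + 1), key=objective)

print(r_star(0.80, 10**4))   # a moderately patient buyer needs a short penalty
print(r_star(0.95, 10**4))   # a more patient buyer requires a longer one
```

The first term grows linearly in r while the second decays geometrically, so the minimizer scales roughly like log T / log(1/γ_0).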
Figure 2: Comparison of the monotone algorithm and PFS_r for different choices of γ and v (panels: γ = .85, v = .75; γ = .95, v = .75; γ = .75, v = .25; γ = .80, v = .25; each panel plots regret against the number of rounds on a log scale for PFS and monotone). The regret of each algorithm is plotted as a function of the number of rounds when γ is not known to the algorithms (first two figures) and when its value is made accessible to the algorithms (last two figures).

Combining these results leads immediately to the following.

Corollary 1. Given any algorithm A, there exists a buyer's valuation v ∈ [0, 1] such that Reg(A, v) ≥ max((1/12) T_γ, C log log T), for a universal constant C.

We now compare the upper bounds given in the previous section with the bound of Corollary 1. For γ > 1/2, we have Reg(PFS_r, v) = O(T_γ log T log log T). On the other hand, for γ ≤ 1/2, we may choose r = 1, in which case, by Proposition 4, Reg(PFS_r, v) = O(log log T). Thus, the upper and lower bounds match up to an O(log T) factor.

6 Empirical results

In this section, we present the results of simulations comparing the monotone algorithm and our algorithm PFS_r. The experiments were carried out as follows: given a buyer's valuation v, a discrete set of false valuations v̂ were selected out of the set {.03, .06, . . . , v}. Both algorithms were run against a buyer making the seller believe her valuation is v̂ instead of v. The value of v̂ achieving the best utility for the buyer was chosen and the regret of both algorithms is reported in Figure 2. We considered two sets of experiments. First, the value of the parameter γ was left unknown to both algorithms and the value of r was set to log(T).
This choice is motivated by the discussion following Theorem 1 since, for large values of T, we can expect to achieve logarithmic regret. The first two plots (from left to right) in Figure 2 depict these results. The apparent stationarity in the regret of PFSr is just a consequence of the scale of the plots as the regret is in fact growing as log(T). For the second set of experiments, we allowed access to the parameter γ to both algorithms. The value of r was chosen optimally based on the results of Theorem 1 and the parameter β of monotone was set to 1 −1/ p TTγ to ensure regret in O( p TTγ + √ T). It is worth noting that even though our algorithm was designed under the assumption of some knowledge about the value of γ, the experimental results show that an exponentially better performance over the monotone algorithm is still attainable and in fact the performances of the optimized and unoptimized versions of our algorithm are comparable. A more comprehensive series of experiments is presented in Appendix 9. 7 Conclusion We presented a detailed analysis of revenue optimization algorithms against strategic buyers. In doing so, we reduced the gap between upper and lower bounds on strategic regret to a logarithmic factor. Furthermore, the algorithm we presented is simple to analyze and reduces to the truthful scenario in the limit of γ →0, an important property that previous algorithms did not admit. We believe that our analysis helps gain a deeper understanding of this problem and that it can serve as a tool for studying more complex scenarios such as that of strategic behavior in repeated second-price auctions, VCG auctions and general market strategies. Acknowledgments We thank Kareem Amin, Afshin Rostamizadeh and Umar Syed for several discussions about the topic of this paper. This work was partly funded by the NSF award IIS-1117591. 8 References R. Agrawal. The continuum-armed bandit problem. SIAM journal on control and optimization, 33 (6):1926–1951, 1995. K. 
Amin, A. Rostamizadeh, and U. Syed. Learning prices for repeated auctions with strategic buyers. In Proceedings of NIPS, pages 1169–1177, 2013. R. Arora, O. Dekel, and A. Tewari. Online bandit learning against an adaptive adversary: from regret to policy regret. In Proceedings of ICML, 2012. P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002a. P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002b. N. Cesa-Bianchi, C. Gentile, and Y. Mansour. Regret minimization for reserve prices in second-price auctions. In Proceedings of SODA, pages 1190–1204, 2013. B. Edelman and M. Ostrovsky. Strategic bidder behavior in sponsored search auctions. Decision Support Systems, 43(1), 2007. D. He, W. Chen, L. Wang, and T. Liu. A game-theoretic machine learning approach for revenue maximization in sponsored search. In Proceedings of IJCAI, pages 206–213, 2013. R. D. Kleinberg and F. T. Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In Proceedings of FOCS, pages 594–605, 2003. V. Kuleshov and D. Precup. Algorithms for the multi-armed bandit problem. Journal of Machine Learning, 2010. P. Milgrom and R. Weber. A theory of auctions and competitive bidding. Econometrica: Journal of the Econometric Society, pages 1089–1122, 1982. M. Mohri and A. Mu˜noz Medina. Learning theory and algorithms for revenue optimization in second-price auctions with reserve. In Proceedings of ICML, 2014. P. Morris. Non-zero-sum games. In Introduction to Game Theory, pages 115–147. Springer, 1994. J. Nachbar. Bayesian learning in repeated games of incomplete information. Social Choice and Welfare, 18(2):303–326, 2001. J. H. Nachbar. Prediction, optimization, and learning in repeated games. Econometrica: Journal of the Econometric Society, pages 275–309, 1997. M. Ostrovsky and M. Schwarz. 
Fundamental Limits of Online and Distributed Algorithms for Statistical Learning and Estimation Ohad Shamir Weizmann Institute of Science ohad.shamir@weizmann.ac.il Abstract Many machine learning approaches are characterized by information constraints on how they interact with the training data. These include memory and sequential access constraints (e.g. fast first-order methods to solve stochastic optimization problems); communication constraints (e.g. distributed learning); partial access to the underlying data (e.g. missing features and multi-armed bandits); and more. However, we currently have little understanding of how such information constraints fundamentally affect performance, independent of the learning problem semantics. For example, are there learning problems where any algorithm which has a small memory footprint (or can use only a bounded number of bits from each example, or has certain communication constraints) will perform worse than what is possible without such constraints? In this paper, we describe how a single set of results implies positive answers to the above for several different settings. 1 Introduction Information constraints play a key role in machine learning. Of course, the main constraint is the availability of only a finite data set to learn from. However, many current problems in machine learning can be characterized as learning with additional information constraints, arising from the manner in which the learner may interact with the data. Some examples include: • Communication constraints in distributed learning: There has been much recent work on learning when the training data is distributed among several machines. Since the machines may work in parallel, this potentially allows significant computational speed-ups and the ability to cope with large datasets.
On the flip side, communication rates between machines are typically much slower than their processing speeds, and a major challenge is to perform these learning tasks with minimal communication. • Memory constraints: The standard implementation of many common learning tasks requires memory which is super-linear in the data dimension. For example, principal component analysis (PCA) requires us to estimate eigenvectors of the data covariance matrix, whose size is quadratic in the data dimension and can be prohibitive for high-dimensional data. Another example is kernel learning, which requires manipulation of the Gram matrix, whose size is quadratic in the number of data points. There has been considerable effort in developing and analyzing algorithms for such problems with reduced memory footprint (e.g. [20, 7, 27, 24]). • Online learning constraints: The need for fast and scalable learning algorithms has popularised the use of online algorithms, which work by sequentially going over the training data and incrementally updating a (usually small) state vector. Well-known special cases include gradient descent and mirror descent algorithms. The requirement of sequentially passing over the data can be seen as a type of information constraint, whereas the small state these algorithms often maintain can be seen as another type of memory constraint. • Partial-information constraints: A common situation in machine learning is when the available data is corrupted, sanitized (e.g. due to privacy constraints), has missing features, or is otherwise partially accessible. There has also been considerable interest in online learning with partial information, where the learner only gets partial feedback on his performance. This has been used to model various problems in web advertising, routing and multiclass learning.
Perhaps the most well-known case is the multi-armed bandits problem, with many other variants being developed, such as contextual bandits, combinatorial bandits, and more general models such as partial monitoring [10, 11]. Although these examples come from very different domains, they all share the common feature of information constraints on how the learning algorithm can interact with the training data. In some specific cases (most notably, multi-armed bandits, and also in the context of certain distributed protocols, e.g. [6, 29]) we can even formalize the price we pay for these constraints, in terms of degraded sample complexity or regret guarantees. However, we currently lack a general information-theoretic framework, which directly quantifies how such constraints can impact performance. For example, are there cases where any online algorithm, which goes over the data one-by-one, must have a worse sample complexity than (say) empirical risk minimization? Are there situations where a small memory footprint provably degrades the learning performance? Can one quantify how a constraint of getting only a few bits from each example affects our ability to learn? In this paper, we make a first step in developing such a framework. We consider a general class of learning processes, characterized only by information-theoretic constraints on how they may interact with the data (and independent of any specific problem semantics). As special cases, these include online algorithms with memory constraints, certain types of distributed algorithms, as well as online learning with partial information. We identify cases where any such algorithm must perform worse than what can be attained without such information constraints.
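As a concrete instance of the bounded-memory online algorithms discussed above, consider a learner whose entire state between examples is a single d-dimensional vector. The following sketch is purely illustrative (names, data, and step size are hypothetical, not taken from the paper):

```python
import numpy as np

# Illustrative sketch (hypothetical): an online least-squares learner whose
# entire persistent state is one d-dimensional weight vector w. It makes a
# single sequential pass over the data and never revisits an example,
# matching the "small state vector" picture of online algorithms above.
def online_sgd(stream, d, lr):
    w = np.zeros(d)                    # the only state kept between examples
    for x, y in stream:                # one sequential pass
        grad = 2.0 * (w @ x - y) * x   # squared-loss gradient on this example
        w -= lr * grad                 # incremental update; (x, y) is then discarded
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
stream = [(x, float(x @ w_true)) for x in rng.standard_normal((2000, 3))]
w_hat = online_sgd(stream, d=3, lr=0.05)   # converges close to w_true
```

The point of the sketch is only the information constraint: the learner's memory is O(d), regardless of how many examples the stream contains.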
The tools developed allow us to establish several results for specific learning problems: • We prove a new and generic regret lower bound for partial-information online learning with expert advice, of the form Ω(√((d/b)T)), where T is the number of rounds, d is the dimension of the loss/reward vector, and b is the number of bits extracted from each loss vector. It is optimal up to log-factors (without further assumptions), and holds no matter what these b bits are – a single coordinate (as in multi-armed bandits), some information on several coordinates (as in semi-bandit feedback), a linear projection (as in bandit linear optimization), some feedback signal from a restricted set (as in partial monitoring), etc. Interestingly, it holds even if the online learner is allowed to adaptively choose which bits of the loss vector it can retain at each round. The lower bound quantifies directly how information constraints in online learning degrade the attainable regret, independent of the problem semantics. • We prove that for some learning and estimation problems – in particular, sparse PCA and sparse covariance estimation in R^d – no online algorithm can attain statistically optimal performance (in terms of sample complexity) with less than Ω̃(d²) memory. To the best of our knowledge, this is the first formal example of a memory/sample-complexity trade-off in a statistical learning setting. • We show that for similar types of problems, there are cases where no distributed algorithm (which is based on a non-interactive or serial protocol on i.i.d. data) can attain optimal performance with less than Ω̃(d²) communication per machine. To the best of our knowledge, this is the first formal example of a communication/sample-complexity trade-off in the regime where the communication budget is larger than the data dimension, and the examples at each machine come from the same underlying distribution.
• We demonstrate the existence of (synthetic) stochastic optimization problems where any algorithm which uses memory linear in the dimension (e.g. stochastic gradient descent or mirror descent) cannot be statistically optimal. Related Work In stochastic optimization, there has been much work on lower bounds for sequential algorithms (e.g. [22, 1, 23]). However, these results all hold in an oracle model, where data is assumed to be made available in a specific form (such as a stochastic gradient estimate). As already pointed out in [22], this does not directly translate to the more common setting, where we are given a dataset and wish to run a simple sequential optimization procedure. In the context of distributed learning and statistical estimation, information-theoretic lower bounds were recently shown in the pioneering work [29], which identifies cases where communication constraints affect statistical performance. These results differ from ours (in the context of distributed learning) in two important ways. First, they pertain to parametric estimation in R^d, where the communication budget per machine is much smaller than what is needed to even specify the answer with constant accuracy (O(d) bits). In contrast, our results pertain to simpler detection problems, where the answer requires only O(log(d)) bits, yet lead to non-trivial lower bounds even when the budget size is much larger (in some cases, much larger than d). The second difference is that their work focuses on distributed algorithms, while we address a more general class of algorithms, which includes other information-constrained settings. Strong lower bounds in the context of distributed learning have also been shown in [6], but they do not apply to a regime where examples across machines come from the same distribution, and where the communication budget is much larger than what is needed to specify the output.
There are well-known lower bounds for multi-armed bandit problems and other settings of online learning with partial information. However, they crucially depend on the semantics of the information feedback considered. For example, the standard multi-armed bandit lower bound [5] pertains to a setting where we can view a single coordinate of the loss vector, but doesn't apply as-is when we can view more than one coordinate (e.g. [4, 25]), get side-information (e.g. [19]), receive a linear or non-linear projection (as in bandit linear and convex optimization), or receive a different type of partial feedback (e.g. partial monitoring [11]). In contrast, our results are generic and can directly apply to any such setting. Memory and communication constraints have been extensively studied within theoretical computer science (e.g. [3, 21]). Unfortunately, almost all these results pertain to data which was either adversarially generated, ordered (in streaming algorithms) or split (in distributed algorithms), and do not apply to statistical learning tasks, where the data is drawn i.i.d. from an underlying distribution. [28, 15] do consider i.i.d. data, but focus on problems such as detecting graph connectivity and counting distinct elements, and not learning problems such as those considered here. Also, there are works on provably memory-efficient algorithms for statistical problems (e.g. [20, 7, 17, 13]), but these do not consider lower bounds or provable trade-offs. Finally, there has been a line of work on hypothesis testing and statistical estimation with finite memory (see [18] and references therein). However, the limitations shown in these works apply when the required precision exceeds the amount of memory available. Due to finite sample effects, this regime is usually relevant only when the data size is exponential in the memory size. In contrast, we do not rely on finite precision considerations.
2 Information-Constrained Protocols We begin with a few words about notation. We use bold-face letters (e.g. x) to denote vectors, and let e_j ∈ R^d denote the j-th standard basis vector. When convenient, we use the standard asymptotic notation O(·), Ω(·), Θ(·) to hide constants, and an additional ˜ sign (e.g. Õ(·)) to also hide log-factors. log(·) refers to the natural logarithm, and log_2(·) to the base-2 logarithm. Our main object of study is the following generic class of information-constrained algorithms: Definition 1 ((b, n, m) Protocol). Given access to a sequence of mn i.i.d. instances (vectors in R^d), an algorithm is a (b, n, m) protocol if it has the following form, for some functions f_t returning an output of at most b bits, and some function f: • For t = 1, . . . , m: – Let X^t be a batch of n i.i.d. instances. – Compute message W^t = f_t(X^t, W^1, W^2, . . . , W^{t−1}). • Return W = f(W^1, . . . , W^m). Note that the functions f_1, . . . , f_m, f are completely arbitrary, may depend on m, and can also be randomized. The crucial assumption is that the outputs W^t are constrained to be only b bits. As the definition above may appear quite abstract, let us consider a few specific examples: • b-memory online protocols: Consider any algorithm which goes over examples one-by-one, and incrementally updates a state vector W^t of bounded size b. We note that a majority of online learning and stochastic optimization algorithms have bounded memory. For example, for linear predictors, most gradient-based algorithms maintain a state whose size is proportional to the size of the parameter vector that is being optimized. Such algorithms correspond to (b, n, m) protocols where W^t is the state vector after round t, with an update function f_t depending only on X^t and W^{t−1}, and f depending only on W^m. n = 1 corresponds to algorithms which use one example at a time, whereas n > 1 corresponds to algorithms using mini-batches.
• Non-interactive and serial distributed algorithms: There are m machines and each machine receives an independent sample X^t of size n. It then sends a message W^t = f_t(X^t) (which here depends only on X^t). A centralized server then combines the messages to compute an output f(W^1, . . . , W^m). This includes for instance divide-and-conquer style algorithms proposed for distributed stochastic optimization (e.g. [30]). A serial variant of the above is when there are m machines, and one-by-one, each machine t broadcasts some information W^t to the other machines, which depends on X^t as well as previous messages sent by machines 1, 2, . . . , (t − 1). • Online learning with partial information: Suppose we sequentially receive d-dimensional loss vectors, and from each of these we can extract and use only b bits of information, where b ≪ d. This includes most types of bandit problems [10]. In our work, we contrast the performance attainable by any algorithm corresponding to such a protocol, to constraint-free protocols which are allowed to interact with the data in any manner. 3 Basic Results Our results are based on a simple 'hide-and-seek' statistical estimation problem, for which we show a strong gap between the performance of information-constrained protocols and constraint-free protocols. It is parameterized by a dimension d, bias ρ, and sample size mn, and defined as follows: Definition 2 (Hide-and-seek Problem). Consider the set of product distributions {Pr_j(·)}_{j=1}^d over {−1, 1}^d defined via E_{x∼Pr_j(·)}[x_i] = 2ρ·1_{i=j} for all coordinates i = 1, . . . , d. Given an i.i.d. sample of mn instances generated from Pr_j(·), where j is unknown, detect j. In words, Pr_j(·) corresponds to picking all coordinates other than j to be ±1 uniformly at random, and independently picking coordinate j to be +1 with a higher probability 1/2 + ρ. The goal is to detect the biased coordinate j based on a sample.
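Definition 2 is easy to simulate. The following sketch (hypothetical instance; the dimension, bias, and sample size are arbitrary) samples from Pr_j and recovers the biased coordinate by the per-coordinate empirical mean, i.e. with no information constraints:

```python
import numpy as np

# Simulating Definition 2 (illustrative sketch): under Pr_j every coordinate
# is uniform on {-1, +1}, except coordinate j, which equals +1 with
# probability 1/2 + rho, so its mean is 2*rho while all other means are 0.
def sample_pr_j(j, d, rho, size, rng):
    X = rng.choice([-1, 1], size=(size, d)).astype(float)
    X[:, j] = rng.choice([-1, 1], size=size, p=[0.5 - rho, 0.5 + rho])
    return X

# Constraint-free detection: keep all d empirical means and take the argmax.
rng = np.random.default_rng(1)
d, rho, j_true = 16, 0.1, 11
X = sample_pr_j(j_true, d, rho, size=20_000, rng=rng)
means = X.mean(axis=0)            # coordinate j_true sits near 2*rho = 0.2
j_hat = int(np.argmax(means))     # recovers j_true
```

Maintaining all d empirical means is exactly the constraint-free strategy; the results below ask what happens when only b bits per instance (or per batch) may be retained.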
First, we note that without information constraints, it is easy to detect the biased coordinate with O(log(d)/ρ²) instances. This is formalized in the following theorem, which is an immediate consequence of Hoeffding's inequality and a union bound: Theorem 1. Consider the hide-and-seek problem defined earlier. Given mn samples, if J̃ is the coordinate with the highest empirical average, then Pr_j(J̃ = j) ≥ 1 − 2d·exp(−(1/2)mnρ²). We now show that for this hide-and-seek problem, there is a large regime where detecting j is information-theoretically possible (by Thm. 1), but any information-constrained protocol will fail to do so with high probability. We first show this for (b, 1, m) protocols (i.e. protocols which process one instance at a time, such as bounded-memory online algorithms, and distributed algorithms where each machine holds a single instance): Theorem 2. Consider the hide-and-seek problem on d > 1 coordinates, with some bias ρ ≤ 1/4 and sample size m. Then for any estimate J̃ of the biased coordinate returned by any (b, 1, m) protocol, there exists some coordinate j such that Pr_j(J̃ = j) ≤ 3/d + 21·√(mρ²b/d). The theorem implies that any algorithm corresponding to (b, 1, m) protocols requires sample size m ≥ Ω((d/b)/ρ²) to reliably detect some j. When b is polynomially smaller than d (e.g. a constant), we get an exponential gap compared to constraint-free protocols, which only require O(log(d)/ρ²) instances. Moreover, Thm. 2 is tight up to log-factors: Consider a b-memory online algorithm which splits the d coordinates into O(d/b) segments of O(b) coordinates each, and sequentially goes over the segments, each time using Õ(1/ρ²) independent instances to determine whether one of the coordinates in that segment is biased by ρ (assuming ρ is not exponentially smaller than b, this can be done with O(b) memory by maintaining the empirical average of each coordinate in the segment). This allows detection of the biased coordinate using Õ((d/b)/ρ²) instances.
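The segment-scanning upper bound just described can be sketched directly. The constants below (batch-size constant, threshold at ρ) are illustrative choices, not taken from the paper:

```python
import numpy as np

# Sketch of the O(b)-memory matching protocol described above (hypothetical
# constants): scan the d coordinates in segments of b, spend a fresh batch of
# ~O(log(d)/rho^2) instances on each segment, and store only the b running
# averages of the current segment.
def segment_scan(sample, d, b, rho, c=32):
    n_seg = int(c * np.log(max(d, 2)) / rho ** 2)      # instances per segment
    for start in range(0, d, b):
        seg = slice(start, min(start + b, d))
        means = sample(n_seg)[:, seg].mean(axis=0)     # the only state: <= b numbers
        if means.max() > rho:          # threshold halfway between 0 and 2*rho
            return start + int(np.argmax(means))
    return -1                          # no biased coordinate detected

rng = np.random.default_rng(2)
d, b, rho, j_true = 32, 4, 0.15, 21
def sample(n):                         # fresh i.i.d. draws from Pr_{j_true}
    X = rng.choice([-1, 1], size=(n, d)).astype(float)
    X[:, j_true] = rng.choice([-1, 1], size=n, p=[0.5 - rho, 0.5 + rho])
    return X
j_hat = segment_scan(sample, d, b, rho)   # recovers j_true
```

Each segment uses a fresh batch, so the total sample size is Õ((d/b)/ρ²) while the memory footprint never exceeds O(b), matching the regime of Theorem 2.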
We now turn to provide an analogous result for general (b, n, m) protocols (where n is possibly greater than 1). However, it is a bit weaker in terms of the dependence on the bias parameter: Theorem 3. Consider the hide-and-seek problem on d > 1 coordinates, with some bias ρ ≤ 1/4n and sample size mn. Then for any estimate J̃ of the biased coordinate returned by any (b, n, m) protocol, there exists some coordinate j such that Pr_j(J̃ = j) ≤ 3/d + 5·√(mn·min{10ρb/d, ρ²}). The theorem implies that any (b, n, m) protocol will require a sample size mn which is at least Ω(max{(d/b)/ρ, 1/ρ²}) in order to detect the biased coordinate. This is larger than the O(log(d)/ρ²) instances required by constraint-free protocols whenever ρ > b·log(d)/d, and establishes trade-offs between sample complexity and information complexities such as memory and communication. Due to lack of space, all our proofs appear in the supplementary material. However, the technical details may obfuscate the high-level intuition, which we now turn to explain. From an information-theoretic viewpoint, our results are based on analyzing the mutual information between j and W^t in a graphical model as illustrated in Figure 1. In this model, the unknown message j (i.e. the identity of the biased coordinate) is correlated with one of d independent binary-valued random vectors (one for each coordinate across the data instances X^t). All these random vectors are noisy, and the mutual information in bits between X^t_j and j can be shown to be on the order of nρ². Without information constraints, it follows that given m instantiations of X^t, the total amount of information conveyed on j by the data is Θ(mnρ²), and if this quantity is larger than log(d), then there is enough information to uniquely identify j. Note that no stronger bound can be established with standard statistical lower-bound techniques, since these do not consider information constraints internal to the algorithm used.
Indeed, in our information-constrained setting there is an added complication, since the output W^t can only contain b bits. If b ≪ d, then W^t cannot convey all the information on X^t_1, . . . , X^t_d. Moreover, it will likely convey only little information if it doesn't already "know" j. For example, W^t may provide a little bit of information on all d coordinates, but then the amount of information conveyed on each (and in particular, on the random variable X^t_j which is correlated with j) will be very small. Alternatively, W^t may provide accurate information on O(b) coordinates, but since the relevant coordinate j is not known, it is likely to "miss" it. The proof therefore relies on the following components: • No matter what, a (b, n, m) protocol cannot provide more than b/d bits of information (in expectation) on X^t_j, unless it already "knows" j. • Even if the mutual information between W^t and X^t_j is only b/d, and the mutual information between X^t_j and j is nρ², standard information-theoretic tools such as the data processing inequality only imply that the mutual information between W^t and j is bounded by min{nρ², b/d}. We essentially prove a stronger information contraction bound, which is the product of the two terms: O(ρ²b/d) when n = 1, and O(nρb/d) for general n. (The proof of Thm. 2 also applies in the case n > 1, but the dependence on n is exponential; see the proof for details.) [Figure 1: Illustration of the relationship between j, the coordinates 1, 2, . . . , j, . . . , d of the sample X^t, and the message W^t. The coordinates are independent of each other, and most of them just output ±1 uniformly at random. Only X^t_j has a slightly different distribution and hence contains some information on j.]
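The "order nρ² bits per instance" claim can be checked numerically: the KL divergence between a ρ-biased coin and a fair coin is ≈ 2ρ² nats for small ρ. This is an illustrative sanity check only, not part of the contraction argument itself:

```python
import numpy as np

# Numeric sanity check (illustrative only): the KL divergence between a
# rho-biased coin and a fair coin is ~ 2*rho^2 nats to leading order, so n
# independent draws of the biased coordinate carry O(n*rho^2) information
# about the identity of j.
def kl_bernoulli(p, q):
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

rho = 0.01
per_instance = kl_bernoulli(0.5 + rho, 0.5)   # exact KL, in nats
leading_order = 2 * rho ** 2                  # small-rho approximation
```

The heart of the proof is then that a b-bit message contracts this ρ²-scale information by a further factor of roughly b/d, rather than the min{nρ², b/d} given by the data processing inequality alone.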
At a technical level, this is achieved by considering the relative entropy between the distributions of W^t with and without a biased coordinate j, relating it to the χ²-divergence between these distributions (using relatively recent analytic results on Csiszár f-divergences [16], [26]), and performing algebraic manipulations to upper bound it by ρ² times the mutual information between W^t and X^t_j, which is on average b/d as discussed earlier. This eventually leads to the mρ²b/d term in Thm. 2, as well as to Thm. 3 using somewhat different calculations. 4 Applications 4.1 Online Learning with Partial Information Consider the setting of learning with expert advice, defined as a game over T rounds, where in each round t a loss vector ℓ_t ∈ [0, 1]^d is chosen, and the learner (without knowing ℓ_t) needs to pick an action i_t from a fixed set {1, . . . , d}, after which the learner suffers loss ℓ_{t,i_t}. The goal of the learner is to minimize the regret with respect to any fixed action i: Σ_{t=1}^T ℓ_{t,i_t} − Σ_{t=1}^T ℓ_{t,i}. We are interested in variants where the learner only gets some partial information on ℓ_t. For example, in multi-armed bandits, the learner can only view ℓ_{t,i_t}. The following theorem is a simple corollary of Thm. 2: Theorem 4. Suppose d > 3. For any (b, 1, T) protocol, there is an i.i.d. distribution over loss vectors ℓ_t ∈ [0, 1]^d for which min_j E[Σ_{t=1}^T ℓ_{t,i_t} − Σ_{t=1}^T ℓ_{t,j}] ≥ c·min{T, √((d/b)T)}, where c > 0 is a numerical constant. As a result, we get that for any algorithm with any partial-information feedback model (where b bits are extracted from each d-dimensional loss vector), it is impossible to get regret lower than Ω(√((d/b)T)) for sufficiently large T. Without further assumptions on the feedback model, the bound is optimal up to log-factors, as shown by O(√((d/b)T)) upper bounds for linear or coordinate measurements (where b is the number of measurements or coordinates seen²) [2, 19, 25].
[Footnote 2: Strictly speaking, if the losses are continuous-valued, these require arbitrary-precision measurements, but in any practical implementation we can assume the losses and measurements are discrete.] However, the lower bound extends beyond these specific settings, and includes cases such as arbitrary nonlinear measurements of the loss vector, or receiving feedback signals of bounded size (although some setting-specific lower bounds may be stronger). It also simplifies previous lower bounds, which were tailored to specific types of partial-information feedback, or relied on careful reductions to multi-armed bandits (e.g. [12, 25]). Interestingly, the bound holds even if the algorithm is allowed to examine each loss vector ℓ_t and adaptively choose which b bits of information it wishes to retain. 4.2 Stochastic Optimization We now turn to consider an example from stochastic optimization, where our goal is to approximately minimize F(h) = E_Z[f(h; Z)] given access to m i.i.d. instantiations of Z, whose distribution is unknown. This setting has received much attention in recent years, and can be used to model many statistical learning problems. In this section, we demonstrate a stochastic optimization problem where information-constrained protocols provably pay a performance price compared to non-constrained algorithms. We emphasize that it is a simple toy problem, and not meant to represent anything realistic. We present it for two reasons: First, it illustrates another type of situation where information-constrained protocols may fail (in particular, problems involving matrices). Second, the intuition of the construction is also used in the more realistic problem of sparse PCA and covariance estimation, considered in the next section. Specifically, suppose we wish to solve min_{(w,v)} F(w, v) = E_Z[f((w, v); Z)], where f((w, v); Z) = w⊤Zv, Z ∈ [−1, +1]^{d×d}, and w, v range over all vectors in the simplex (i.e. w_i, v_i ≥ 0 and Σ_{i=1}^d w_i = Σ_{i=1}^d v_i = 1).
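The natural constraint-free approach to this problem is a plug-in estimator that tracks all d² empirical means. A minimal sketch (the toy instance and its entry means are hypothetical, chosen only for illustration):

```python
import numpy as np

# Constraint-free plug-in estimator for min_{w,v} E[w^T Z v] over the simplex
# (illustrative sketch): track all d*d empirical means and output the
# indicator vectors of the entry with the smallest one. Maintaining Theta(d^2)
# statistics is precisely what (b, 1, m) protocols with b = O(d) cannot do.
def plugin_minimizer(Z_samples):
    means = Z_samples.mean(axis=0)                      # d x d empirical means
    i_hat, j_hat = np.unravel_index(np.argmin(means), means.shape)
    d = means.shape[0]
    w = np.zeros(d); w[i_hat] = 1.0                     # e_{i_hat}
    v = np.zeros(d); v[j_hat] = 1.0                     # e_{j_hat}
    return w, v, (int(i_hat), int(j_hat))

rng = np.random.default_rng(3)
d, m = 10, 20_000
M = np.zeros((d, d)); M[4, 7] = -0.1                    # unique minimal-mean entry
Z = np.where(rng.random((m, d, d)) < (1.0 + M) / 2.0, 1.0, -1.0)  # E[Z] = M
w, v, idx = plugin_minimizer(Z)                         # recovers (4, 7)
```

Since the minimizer of F over the simplex is attained at a pair of vertices, finding the entry with the smallest empirical mean suffices, which is exactly the reduction to hide-and-seek over d² coordinates used below.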
A minimizer of F(w, v) is (e_{i*}, e_{j*}), where (i*, j*) are the indices of the matrix entry with minimal mean. Moreover, by a standard concentration-of-measure argument, given m i.i.d. instantiations Z^1, . . . , Z^m from any distribution over Z, the solution (e_Ĩ, e_J̃), where (Ĩ, J̃) = argmin_{i,j} (1/m)·Σ_{t=1}^m Z^t_{i,j} are the indices of the entry with smallest empirical mean, satisfies F(e_Ĩ, e_J̃) ≤ min_{w,v} F(w, v) + O(√(log(d)/m)) with high probability. However, computing (Ĩ, J̃) as above requires us to track d² empirical means, which may be expensive when d is large. If instead we constrain ourselves to (b, 1, m) protocols where b = O(d) (e.g. any stochastic-gradient-type optimization algorithm whose memory is linear in the number of parameters), then we claim that we have a lower bound of Ω(min{1, √(d/m)}) on the expected error, which is much higher than the O(√(log(d)/m)) upper bound for constraint-free protocols. This claim is a straightforward consequence of Thm. 2: We consider distributions where Z ∈ {−1, +1}^{d×d} with probability 1, each of the d² entries is chosen independently, and E[Z] is zero except at some coordinate (i*, j*) where it equals O(√(d/m)). For such distributions, getting optimization error smaller than O(√(d/m)) reduces to detecting (i*, j*), and this in turn reduces to the hide-and-seek problem defined earlier, over d² coordinates and with bias ρ = O(√(d/m)). However, Thm. 2 shows that no (b, 1, m) protocol (where b = O(d)) will succeed if mdρ² ≪ d², which indeed happens if ρ is small enough. Similar kinds of gaps can be shown using Thm. 3 for general (b, n, m) protocols, which apply to any special case such as non-interactive distributed learning. 4.3 Sparse PCA, Sparse Covariance Estimation, and Detecting Correlations The sparse PCA problem ([31]) is a standard and well-known statistical estimation problem, defined as follows: We are given an i.i.d.
sample of vectors x ∈ R^d, and we assume that there is some direction, corresponding to some sparse vector v (of cardinality at most k), such that the variance E[(v⊤x)²] along that direction is larger than in any other direction. Our goal is to find that direction. We will focus here on the simplest possible form of this problem, where the maximizing direction v is assumed to be 2-sparse, i.e. there are only 2 non-zero coordinates v_i, v_j. In that case, E[(v⊤x)²] = v_i²·E[x_i²] + v_j²·E[x_j²] + 2v_iv_j·E[x_ix_j]. Following previous work (e.g. [8]), we even assume that E[x_i²] = 1 for all i, in which case the sparse PCA problem reduces to detecting a coordinate pair (i*, j*), i* < j*, for which x_{i*}, x_{j*} are maximally correlated. A special case is a simple and natural sparse covariance estimation problem [9], where we assume that all covariates are uncorrelated (E[x_ix_j] = 0) except for a unique correlated pair (i*, j*) which we need to detect. This setting bears a resemblance to the example seen in the context of stochastic optimization in Section 4.2: We have a d × d stochastic matrix xx⊤, and we need to detect an off-diagonal biased entry at location (i*, j*). Unfortunately, these stochastic matrices are rank-1, and do not have independent entries as in the example considered in Section 4.2. Instead, we use a more delicate construction, relying on distributions supported on sparse vectors. The intuition is that each instantiation of xx⊤ is then sparse, and the situation can be reduced to a variant of our hide-and-seek problem where only a few coordinates are non-zero at a time. The theorem below establishes performance gaps between constraint-free protocols (in particular, a simple plug-in estimator) and any (b, n, m) protocol for a specific choice of n, or any b-memory online protocol (see Sec. 2). Theorem 5.
Consider the class of 2-sparse PCA (or covariance estimation) problems in d ≥ 9 dimensions as described above, and all distributions such that E[x_i²] = 1 for all i, and: 1. For a unique pair of distinct coordinates (i*, j*), it holds that E[x_{i*}x_{j*}] = τ > 0, whereas E[x_ix_j] = 0 for all distinct coordinate pairs (i, j) ≠ (i*, j*). 2. For any i < j, if x̃ixj is the empirical average of x_ix_j over m i.i.d. instances, then Pr(|x̃ixj − E[x_ix_j]| ≥ τ/2) ≤ 2·exp(−mτ²/6). Then the following holds: • Let (Ĩ, J̃) = argmax_{i<j} x̃ixj. Then for any distribution as above, Pr((Ĩ, J̃) = (i*, j*)) ≥ 1 − d²·exp(−mτ²/6). In particular, when the bias τ equals Θ(1/(d·log(d))), Pr((Ĩ, J̃) = (i*, j*)) ≥ 1 − d²·exp(−Ω(m/(d²·log²(d)))). • For any estimate (Ĩ, J̃) of (i*, j*) returned by any b-memory online protocol using m instances, or any (b, d(d−1), ⌊m/(d(d−1))⌋) protocol, there exists a distribution with bias τ = Θ(1/(d·log(d))) as above such that Pr((Ĩ, J̃) = (i*, j*)) ≤ O(1/d² + √(m/(d⁴/b))). The theorem implies that in the regime where b ≪ d²/log²(d), we can choose any m such that d⁴/b ≫ m ≫ d²·log²(d), and get that the chances of the protocol detecting (i*, j*) are arbitrarily small, even though the empirical average reveals (i*, j*) with arbitrarily high probability. Thus, in this sparse PCA / covariance estimation setting, any online algorithm with sub-quadratic memory cannot be statistically optimal for all sample sizes. The same holds for any (b, n, m) protocol in an appropriate regime of (n, m), such as the distributed algorithms discussed earlier. To the best of our knowledge, this is the first result which explicitly shows that memory constraints can incur a statistical cost for a standard estimation problem. It is interesting that sparse PCA was also shown recently to be affected by computational constraints on the algorithm's runtime ([8]). The proof appears in the supplementary material.
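The plug-in estimator of the theorem's first bullet can be sketched on a toy instance (the instance below, where one coordinate copies another with probability (1 + τ)/2, is a hypothetical illustration, not the paper's lower-bound construction):

```python
import numpy as np

# Plug-in detector for the 2-sparse covariance problem (illustrative sketch):
# estimate every pairwise second moment and return the arg-max pair. This
# matches the theorem's first bullet statistically, but it maintains
# Theta(d^2) statistics -- exactly what sub-quadratic-memory online protocols
# cannot afford.
def detect_pair(X):
    m = X.shape[0]
    C = (X.T @ X) / m                  # empirical second-moment matrix
    np.fill_diagonal(C, -np.inf)       # ignore the diagonal (E[x_i^2] = 1)
    i, j = np.unravel_index(np.argmax(C), C.shape)
    return (int(min(i, j)), int(max(i, j)))

# Toy instance: all coordinates are +/-1 and independent, except that x_{j*}
# copies x_{i*} with probability (1 + tau)/2, giving E[x_{i*} x_{j*}] = tau.
rng = np.random.default_rng(4)
d, m, tau, i_star, j_star = 12, 50_000, 0.2, 2, 9
X = rng.choice([-1.0, 1.0], size=(m, d))
copy = rng.random(m) < (1.0 + tau) / 2.0
X[:, j_star] = np.where(copy, X[:, i_star], -X[:, i_star])
pair = detect_pair(X)                  # recovers (2, 9)
```

In the theorem's hard regime τ = Θ(1/(d·log d)), this estimator still succeeds given m ≫ d²·log²(d) samples, while any sub-quadratic-memory protocol fails for m ≪ d⁴/b.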
Besides using a somewhat different hide-and-seek construction, as mentioned earlier, the proof also relies on the simple but powerful observation that any b-memory online protocol is also a (b, κ, ⌊m/κ⌋) protocol for arbitrary κ. Therefore, we only need to prove the theorem for (b, κ, ⌊m/κ⌋) protocols for some κ (chosen to equal d(d − 1) in our case) to automatically get the same result for b-memory protocols. 5 Discussion and Open Questions In this paper, we investigated cases where a generic type of information-constrained algorithm has strictly inferior statistical performance compared to constraint-free algorithms. As special cases, we demonstrated such gaps for memory-constrained and communication-constrained algorithms (e.g. in the context of sparse PCA and covariance estimation), as well as for online learning with partial information and stochastic optimization. These results are based on explicitly considering the information-theoretic structure of the problem, and depend only on the number of bits extracted from each data batch. Several questions remain open. One question is whether Thm. 3 can be improved. We conjecture this is true, and that the bound should actually depend on mnρ²b/d rather than mn·min{ρb/d, ρ²}. This would allow, for instance, showing the same type of performance gaps for (b, 1, m) protocols and (b, n, m) protocols. A second open question is whether there are convex stochastic optimization problems for which online or distributed algorithms are provably inferior to constraint-free algorithms (the example discussed in Section 4.2 refers to an easily-solvable yet non-convex problem). A third open question is whether our results for distributed algorithms can be extended to more interactive protocols, where the different machines can communicate over several rounds. There is a rich literature on the subject within theoretical computer science, but it is not clear how to 'import' these results to a statistical setting based on i.i.d. data.
A fourth open question is whether the performance gap that we demonstrated for sparse-PCA / covariance estimation can be extended to a 'natural' distribution (e.g. Gaussian), as our result uses a tailored distribution, which has a sufficiently controlled tail behavior but is 'spiky' and not sub-Gaussian uniformly in the dimension. More generally, it would be interesting to extend the results to other learning problems and information constraints.

Acknowledgements: This research is supported by the Intel ICRI-CI Institute, Israel Science Foundation grant 425/13, and an FP7 Marie Curie CIG grant. We thank John Duchi, Yevgeny Seldin and Yuchen Zhang for helpful comments.

References
[1] A. Agarwal, P. Bartlett, P. Ravikumar, and M. Wainwright. Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization. Information Theory, IEEE Transactions on, 58(5):3235–3249, 2012.
[2] A. Agarwal, O. Dekel, and L. Xiao. Optimal algorithms for online convex optimization with multi-point bandit feedback. In COLT, 2010.
[3] N. Alon, Y. Matias, and M. Szegedy. The space complexity of approximating the frequency moments. In STOC, 1996.
[4] J.-Y. Audibert, S. Bubeck, and G. Lugosi. Minimax policies for combinatorial prediction games. In COLT, 2011.
[5] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[6] M. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed learning, communication complexity and privacy. In COLT, 2012.
[7] A. Balsubramani, S. Dasgupta, and Y. Freund. The fast convergence of incremental PCA. In NIPS, 2013.
[8] A. Berthet and P. Rigollet. Complexity theoretic lower bounds for sparse principal component detection. In COLT, 2013.
[9] J. Bien and R. Tibshirani. Sparse estimation of a covariance matrix. Biometrika, 98(4):807–820, 2011.
[10] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[11] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.
[12] N. Cesa-Bianchi, S. Shalev-Shwartz, and O. Shamir. Efficient learning with partially observed attributes. The Journal of Machine Learning Research, 12:2857–2878, 2011.
[13] S. Chien, K. Ligett, and A. McGregor. Space-efficient estimation of robust statistics and distribution testing. In ICS, 2010.
[14] T. Cover and J. Thomas. Elements of information theory. John Wiley & Sons, 2006.
[15] M. Crouch, A. McGregor, and D. Woodruff. Stochastic streams: Sample complexity vs. space complexity. In MASSIVE, 2013.
[16] S. S. Dragomir. Upper and lower bounds for Csiszár's f-divergence in terms of the Kullback-Leibler distance and applications. In Inequalities for Csiszár f-Divergence in Information Theory. RGMIA Monographs, 2000.
[17] S. Guha and A. McGregor. Space-efficient sampling. In AISTATS, 2007.
[18] L. Kontorovich. Statistical estimation with bounded memory. Statistics and Computing, 22(5):1155–1164, 2012.
[19] S. Mannor and O. Shamir. From bandits to experts: On the value of side-observations. In NIPS, 2011.
[20] I. Mitliagkas, C. Caramanis, and P. Jain. Memory limited, streaming PCA. In NIPS, 2013.
[21] S. Muthukrishnan. Data streams: Algorithms and applications. Now Publishers Inc, 2005.
[22] A. Nemirovsky and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley-Interscience, 1983.
[23] M. Raginsky and A. Rakhlin. Information-based complexity, feedback and dynamics in convex programming. Information Theory, IEEE Transactions on, 57(10):7036–7056, 2011.
[24] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[25] Y. Seldin, P. Bartlett, K. Crammer, and Y. Abbasi-Yadkori. Prediction with limited advice and multiarmed bandits with paid observations. In ICML, 2014.
[26] I. Taneja and P. Kumar. Relative information of type s, Csiszár's f-divergence, and information inequalities. Inf. Sci., 166(1-4):105–125, 2004.
[27] C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In NIPS, 2001.
[28] D. Woodruff. The average-case complexity of counting distinct elements. In ICDT, 2009.
[29] Y. Zhang, J. Duchi, M. Jordan, and M. Wainwright. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In NIPS, 2013.
[30] Y. Zhang, J. Duchi, and M. Wainwright. Communication-efficient algorithms for statistical optimization. In NIPS, 2012.
[31] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2):265–286, 2006.
LSDA: Large Scale Detection through Adaptation Judy Hoffman⋄, Sergio Guadarrama⋄, Eric Tzeng⋄, Ronghang Hu∇, Jeff Donahue⋄, ⋄EECS, UC Berkeley, ∇EE, Tsinghua University {jhoffman, sguada, tzeng, jdonahue}@eecs.berkeley.edu hrh11@mails.tsinghua.edu.cn Ross Girshick⋄, Trevor Darrell⋄, Kate Saenko△ ⋄EECS, UC Berkeley, △CS, UMass Lowell {rbg, trevor}@eecs.berkeley.edu, saenko@cs.uml.edu Abstract A major challenge in scaling object detection is the difficulty of obtaining labeled images for large numbers of categories. Recently, deep convolutional neural networks (CNNs) have emerged as clear winners on object classification benchmarks, in part due to training with 1.2M+ labeled classification images. Unfortunately, only a small fraction of those labels are available for the detection task. It is much cheaper and easier to collect large quantities of image-level labels from search engines than it is to collect detection data and label it with precise bounding boxes. In this paper, we propose Large Scale Detection through Adaptation (LSDA), an algorithm which learns the difference between the two tasks and transfers this knowledge to classifiers for categories without bounding box annotated data, turning them into detectors. Our method has the potential to enable detection for the tens of thousands of categories that lack bounding box annotations, yet have plenty of classification data. Evaluation on the ImageNet LSVRC-2013 detection challenge demonstrates the efficacy of our approach. This algorithm enables us to produce a >7.6K detector by using available classification data from leaf nodes in the ImageNet tree. We additionally demonstrate how to modify our architecture to produce a fast detector (running at 2fps for the 7.6K detector). Models and software are available at lsda.berkeleyvision.org. 1 Introduction Both classification and detection are key visual recognition challenges, though historically very different architectures have been deployed for each. 
Recently, the R-CNN model [1] showed how to adapt an ImageNet classifier into a detector, but required bounding box data for all categories. We ask, is there something generic in the transformation from classification to detection that can be learned on a subset of categories and then transferred to other classifiers?

One of the fundamental challenges in training object detection systems is the need to collect a large number of images with bounding box annotations. The introduction of detection challenge datasets, such as PASCAL VOC [2], has propelled progress by providing the research community with a dataset containing enough fully annotated images to train competitive models, albeit for only 20 classes. Even though the more recent ImageNet detection challenge dataset [3] has extended the set of annotated images, it only contains data for 200 categories. As we look forward towards the goal of scaling our systems to human-level category detection, it becomes impractical to collect a large quantity of bounding box labels for tens or hundreds of thousands of categories.

∗This work was supported in part by DARPA's MSEE and SMISC programs, by NSF awards IIS-1427425, IIS-1212798, and IIS-1116411, and by support from Toyota.

[Figure 1: schematic of classifiers (W_CLASSIFY) and detectors (W_DET) learned from classification images (I_CLASSIFY) and detection images (I_DET) for classes such as dog, apple, and cat.]

Figure 1: The core idea is that we can learn detectors (weights) from labeled classification data (left), for a wide range of classes. For some of these classes (top) we also have detection labels (right), and can learn detectors. But what can we do about the classes with classification data but no detection data (bottom)? Can we learn something from the paired relationships for the classes for which we have both classifiers and detectors, and transfer that to the classifier at the bottom to make it into a detector?

In contrast, image-level annotation is comparatively easy to acquire.
The prevalence of image tags allows search engines to quickly produce a set of images that have some correspondence to any particular category. ImageNet [3], for example, has made use of these search results in combination with manual outlier detection to produce a large classification dataset comprised of over 20,000 categories. While this data can be effectively used to train object classifier models, it lacks the supervised annotations needed to train state-of-the-art detectors. In this work, we propose Large Scale Detection through Adaptation (LSDA), an algorithm that learns to transform an image classifier into an object detector. To accomplish this goal, we use supervised convolutional neural networks (CNNs), which have recently been shown to perform well both for image classification [4] and object detection [1, 5]. We cast the task as a domain adaptation problem, considering the data used to train classifiers (images with category labels) as our source domain, and the data used to train detectors (images with bounding boxes and category labels) as our target domain. We then seek to find a general transformation from the source domain to the target domain that can be applied to any image classifier to adapt it into an object detector (see Figure 1). Girshick et al. (R-CNN) [1] demonstrated that adaptation, in the form of fine-tuning, is very important for transferring deep features from classification to detection, and partially inspired our approach. However, the R-CNN algorithm uses classification data only to pre-train a deep network and then requires a large number of bounding boxes to train each detection category. Our LSDA algorithm uses image classification data to train strong classifiers, requires detection bounding box labeled data for only a small subset of the final detection categories, and takes much less time to train.
It uses the classes labeled with both classification and detection labels to learn a transformation of the classification network into a detection network. It then applies this transformation to adapt classifiers for categories without any bounding box annotated data into detectors. Our experiments on the ImageNet detection task show significant improvement (+50% relative mAP) over a baseline of just using raw classifier weights on object proposal regions. One can adapt any ImageNet-trained classifier into a detector using our approach, whether or not there are corresponding detection labels for that class.

2 Related Work

Recently, Multiple Instance Learning (MIL) has been used for training detectors using weak labels, i.e. images with category labels but not bounding box labels. The MIL paradigm estimates latent labels of examples in positive training bags, where each positive bag is known to contain at least one positive example. Ali et al. [6] construct positive bags from all object proposal regions in a weakly labeled image that is known to contain the object, and use a version of MIL to learn an object detector. A similar method [7] learns detectors from PASCAL VOC images without bounding box labels.

[Figure 2: LSDA detection pipeline — input image → region proposals → warped region → LSDA net → per-category scores (e.g. cat: 0.90, dog: 0.45, background: 0.25), with adapted layers fcA, fcB, δB.]

Figure 2: Detection with the LSDA network. Given an image, extract region proposals, reshape the regions to fit into the network size and finally produce detection scores per category for the region. Layers with red dots/fill indicate they have been modified/learned during fine-tuning with available bounding box annotated data.

MIL-based methods are a promising approach that is complementary to ours. They have not yet been evaluated on the large-scale ImageNet detection challenge to allow for direct comparison.
Deep convolutional neural networks (CNNs) have emerged as state of the art on popular object classification benchmarks (ILSVRC, MNIST) [4]. In fact, “deep features” extracted from CNNs trained on the object classification task are also state of the art on other tasks, e.g., subcategory classification, scene classification, domain adaptation [8] and even image matching [9]. Unlike the previously dominant features (SIFT [10], HOG [11]), deep CNN features can be learned for each specific task, but only if sufficient labeled training data are available. R-CNN [1] showed that fine-tuning deep features on a large amount of bounding box labeled data significantly improves detection performance. Domain adaptation methods aim to reduce dataset bias caused by a difference in the statistical distributions between training and test domains. In this paper, we treat the transformation of classifiers into detectors as a domain adaptation task. Many approaches have been proposed for classifier adaptation; e.g., feature space transformations [12], model adaptation approaches [13, 14] and joint feature and model adaptation [15, 16]. However, even the joint learning models are not able to modify the feature extraction process and so are limited to shallow adaptation techniques. Additionally, these methods only adapt between visual domains, keeping the task fixed, while we adapt both from a large visual domain to a smaller visual domain and from a classification task to a detection task. Several supervised domain adaptation models have been proposed for object detection. Given a detector trained on a source domain, they adjust its parameters on labeled target domain data. These include variants for linear support vector machines [17, 18, 19], as well as adaptive latent SVMs [20] and an adaptive exemplar SVM [21]. A related recent method [22] proposes a fast adaptation technique based on Linear Discriminant Analysis.
These methods require labeled detection data for all object categories, both in the source and target domains, which is absent in our scenario. To our knowledge, ours is the first method to adapt to held-out categories that have no detection data.

3 Large Scale Detection through Adaptation (LSDA)

We propose Large Scale Detection through Adaptation (LSDA), an algorithm for adapting classifiers to detectors. With our algorithm, we are able to produce a detection network for all categories of interest, whether or not bounding boxes are available at training time (see Figure 2). Suppose we have K categories we want to detect, but we only have bounding box annotations for m categories. We will refer to the set of categories with bounding box annotations as B = {1, ..., m}, and the set of categories without bounding box annotations as set A = {m+1, ..., K}. In practice, we will likely have m ≪ K, as is the case in the ImageNet dataset. We assume availability of classification data (image-level labels) for all K categories and will use that data to initialize our network.

LSDA transforms image classifiers into object detectors using three key insights:
1. Recognizing background is an important step in adapting a classifier into a detector
2. Category invariant information can be transferred between the classifier and detector feature representations
3. There may be category specific differences between a classifier and a detector

We will next demonstrate how our method accomplishes each of these insights as we describe the training of LSDA.

3.1 Training LSDA: Category Invariant Adaptation

For our convolutional neural network, we adopt the architecture of Krizhevsky et al. [4], which achieved state-of-the-art performance on the ImageNet ILSVRC2012 classification challenge.
Since this network requires a large amount of data and time to train its approximately 60 million parameters, we start from a CNN pre-trained on the ILSVRC2012 classification dataset, which contains 1.2 million classification-labeled images of 1000 categories. Pre-training on this dataset has been shown to be a very effective technique [8, 5, 1], both in terms of performance and in terms of limiting the amount of in-domain labeled data needed to successfully tune the network. Next, we replace the last weight layer (1000 linear classifiers) with K linear classifiers, one for each category in our task. This weight layer is randomly initialized and then we fine-tune the whole network on our classification data. At this point, we have a network that can take an image or a region proposal as input, and produce a set of scores for each of the K categories. We find that even using the net trained on classification data in this way produces a strong baseline (see Section 4). We next transform our classification network into a detection network. We do this by fine-tuning layers 1-7 using the available labeled detection data for categories in set B. Following the Regions-based CNN (R-CNN) [1] algorithm, we collect positive bounding boxes for each category in set B as well as a set of background boxes using a region proposal algorithm, such as selective search [23]. We use each labeled region as a fine-tuning input to the CNN after padding and warping it to the CNN's input size. Note that the R-CNN fine-tuning algorithm requires bounding box annotated data for all categories and so cannot directly be applied to train all K detectors. Fine-tuning transforms all network weights (except for the linear classifiers for set A) and produces a softmax detector for categories in set B, which includes a weight vector for the new background class.
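As a schematic illustration of the head swap described above (a numpy sketch of ours, not the released Caffe code; the dimensions match AlexNet's fc7/fc8, but all weights and features here are toy data):

```python
import numpy as np

rng = np.random.default_rng(0)
K, feat_dim = 200, 4096          # K task categories, fc7 feature dimension

# Pretrained 1000-way ImageNet output layer (discarded by the head swap).
fc8_imagenet = rng.normal(size=(1000, feat_dim))

# Replace it with K randomly initialized linear classifiers; in LSDA these
# are then fine-tuned on the task's classification data with the lower layers.
fc8_task = rng.normal(scale=0.01, size=(K, feat_dim))

def scores(features, W):
    """Softmax scores over categories for one image/region's fc7 features."""
    logits = W @ features
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

features = rng.normal(size=feat_dim)    # stand-in for fc7 activations
p = scores(features, fc8_task)
```

The same forward pass serves both training stages: first with full images for classification fine-tuning, then with warped region proposals for detection fine-tuning.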
Layers 1-7 are shared between all categories in set B and we find empirically that fine-tuning induces a generic, category invariant transformation of the classification network into a detection network. That is, even though fine-tuning sees no detection data for categories in set A, the network transforms in a way that automatically makes the original set A image classifiers much more effective at detection (see Figure 3). Fine-tuning for detection also learns a background weight vector that encodes a generic “background” category. This background model is important for modeling the task shift from image classification, which does not include background distractors, to detection, which is dominated by background patches.

3.2 Training LSDA: Category Specific Adaptation

Finally, we learn a category specific transformation that will change the classifier model parameters into the detector model parameters that operate on the detection feature representation. The category specific output layer (fc8) is comprised of fcA, fcB, δB, and fcBG. For categories in set B, this transformation can be learned through directly fine-tuning the category specific parameters fcB (Figure 2). This is equivalent to fixing fcB and learning a new, zero-initialized layer δB with loss equivalent to fcB, and adding together the outputs of δB and fcB. Let us define the weights of the output layer of the original classification network as W^c, and the weights of the output layer of the adapted detection network as W^d. We know that for a category i ∈ B, the final detection weights should be computed as W_i^d = W_i^c + δB_i. However, since there is no detection data for categories in A, we cannot directly learn a corresponding δA layer during fine-tuning. Instead, we can approximate the fine-tuning that would have occurred to fcA had detection data been available. We do this by finding the nearest-neighbor categories in set B for each category in set A and applying the average change.
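This nearest-neighbor averaging (formalized as Eq. (1) below) reduces to a few lines of linear algebra. A minimal numpy sketch, with names (`adapt_output_layer`, `W_c`, `delta_B`) that are ours rather than from the released code:

```python
import numpy as np

def adapt_output_layer(W_c, delta_B, B_idx, A_idx, k=10):
    """Adapt classifier weights W_c (one row per category) into detector
    weights. Rows of delta_B are the fine-tuned offsets for categories in
    B_idx (aligned row-for-row). Each category j in A_idx receives the mean
    offset of its k nearest neighbors in B, where neighbors are measured by
    Euclidean distance between l2-normalized fc8 weight rows."""
    W_d = W_c.copy()
    W_d[B_idx] += delta_B                  # set B: directly learned offsets
    normed = W_c / np.linalg.norm(W_c, axis=1, keepdims=True)
    for j in A_idx:                        # set A: no detection data
        dists = np.linalg.norm(normed[B_idx] - normed[j], axis=1)
        nn = np.argsort(dists)[:k]         # k nearest categories in B
        W_d[j] += delta_B[nn].mean(axis=0)
    return W_d

# Toy example: 6 categories, first 4 in B, last 2 in A, 5-dim weights.
rng = np.random.default_rng(1)
W_c = rng.normal(size=(6, 5))
delta_B = rng.normal(size=(4, 5))
W_d = adapt_output_layer(W_c, delta_B, B_idx=np.arange(4),
                         A_idx=np.arange(4, 6), k=2)
```

At test time (Section 3.3), the per-region score for category i is then the linear score from row i of W^d combined with the background score.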
Here we define nearest neighbors as those categories with the nearest (minimal Euclidean distance) ℓ2-normalized fc8 parameters in the classification network. This corresponds to the classification model being most similar and hence, we assume, the detection model should be most similar. We denote the kth nearest neighbor in set B of category j ∈ A as NB(j, k); then we compute the final output detection weights for categories in set A as:

∀j ∈ A : W_j^d = W_j^c + (1/k) Σ_{i=1}^{k} δB_{NB(j,i)}    (1)

Thus, we adapt the category specific parameters even without bounding boxes for categories in set A. In the next section we experiment with various values of k, including taking the full average: k = |B|.

3.3 Detection with LSDA

At test time we use our network to extract K + 1 scores per region proposal in an image (similar to the R-CNN [1] pipeline): one for each category and an additional score for the background category. Finally, for a given region, the score for category i is computed by combining the per category score with the background score: score_i − score_background. In contrast to the R-CNN [1] model, which trains SVMs on the extracted features from layer 7 and bounding box regression on the extracted features from layer 5, we directly use the final score vector to produce the prediction scores without either of the retraining steps. This choice results in a small performance loss, but offers the flexibility of being able to directly combine the classification portion of the network that has no detection labeled data, and reduces the training time from 3 days to roughly 5.5 hours.

4 Experiments

To demonstrate the effectiveness of our approach we present quantitative results on the ILSVRC2013 detection dataset. The dataset offers a 200-category detection challenge. The training set has ∼400K annotated images and on average 1.534 object classes per image. The validation set has 20K annotated images with ∼50K annotated objects.
We simulate having access to classification labels for all 200 categories and having detection annotations for only the first 100 categories (alphabetically sorted).

4.1 Experiment Setup & Implementation Details

We start by separating our data into classification and detection sets for training and a validation set for testing. Since the ILSVRC2013 training set has on average fewer objects per image than the validation set, we use this data as our classification data. To balance the categories we use ≈1000 images per class (200,000 total images). Note: for classification data we only have access to a single image-level annotation that gives a category label. In effect, since the training set may contain multiple objects, this single full-image label is a weak annotation, even compared to other classification training data sets. Next, we split the ILSVRC2013 validation set in half as [1] did, producing two sets: val1 and val2. To construct our detection training set, we take the images with bounding box labels from val1 for only the first 100 categories (≈5000 images). Since the validation set is relatively small, we augment our detection set with 1000 bounding box annotated images per category from the ILSVRC2013 training set (following the protocol of [1]). Finally we use the second half of the ILSVRC2013 validation set (val2) for our evaluation. We implemented our CNN architectures and executed all fine-tuning using the open-source software package Caffe [24], and have made our model definitions and weights publicly available.

4.2 Quantitative Analysis on Held-out Categories

We evaluate the importance of each component of our algorithm through an ablation study. As a baseline we consider training the network with only the classification data (no adaptation) and applying the network to the region proposals. The summary of the importance of our three adaptation components is shown in Figure 3.
Our full LSDA model achieves a 50% relative mAP boost over the classification only network.

Table 1: Ablation study for the components of LSDA. We consider removing different pieces of our algorithm to determine which pieces are essential. We consider training with the first 100 (alphabetically) categories of the ILSVRC2013 detection validation set (on val1) and report mean average precision (mAP) over the 100 trained on and 100 held out categories (on val2). We find the best improvement is from fine-tuning all layers and using category specific adaptation.

Layers adapted | mAP Trained 100 Categories | mAP Held-out 100 Categories | mAP All 200 Categories
No Adapt (Classification Network) | 12.63 | 10.31 | 11.90
fcbgrnd | 14.93 | 12.22 | 13.60
fcbgrnd, fc6 | 24.72 | 13.72 | 19.20
fcbgrnd, fc7 | 23.41 | 14.57 | 19.00
fcbgrnd, fcB | 18.04 | 11.74 | 14.90
fcbgrnd, fc6, fc7 | 25.78 | 14.20 | 20.00
fcbgrnd, fc6, fc7, fcB | 26.33 | 14.42 | 20.40
fcbgrnd, layers 1-7, fcB | 27.81 | 15.85 | 21.83
fcbgrnd, layers 1-7, fcB, Avg NN (k=5) | 28.12 | 15.97 | 22.05
fcbgrnd, layers 1-7, fcB, Avg NN (k=10) | 27.95 | 16.15 | 22.05
fcbgrnd, layers 1-7, fcB, Avg NN (k=100) | 27.91 | 15.96 | 21.94
Oracle: Full Detection Network | 29.72 | 26.25 | 28.00

The most important step of our algorithm proved to be adapting the feature representation, while the least important was adapting the category specific parameters. This fits with our intuition that the main benefit of our approach is to transfer category invariant information from categories with known bounding box annotations to those without.

[Figure 3: bar chart (mAP%) — Classification Net: 10.31, LSDA (bg only): 12.2, LSDA (bg+ft): 15.85, LSDA: 16.15.]

Figure 3: Comparison (mAP%) of our full system (LSDA) on categories with no bounding boxes at training time.

In Table 1, we present a more detailed analysis of the different adaptation techniques we could use to train the network.
We find that the best category invariant adaptation approach is to learn the background category layer and adapt all convolutional and fully connected layers, bringing mAP on the held-out categories from 10.31% up to 15.85%. Additionally, using output layer adaptation (k = 10) further improves performance, bringing mAP to 16.15% on the held-out categories (statistically significant at p = 0.017 using a paired sample t-test [25]). The last row shows the performance achievable by our detection network if it had access to detection data for all 200 categories, and serves as a performance upper bound.¹

We find that one of the biggest reasons our algorithm improves is from reducing localization error. For example, in Figure 4, we show that while the classification only trained net tends to focus on the most discriminative part of an object (ex: face of an animal), after our adaptation we learn to localize the whole object (ex: entire body of the animal).

¹To achieve R-CNN performance requires additionally learning SVMs on the activations of layer 7 and bounding box regression on the activations of layer 5. Each of these steps adds between 1-2 mAP at high computation cost, and using the SVMs removes the adaptation capacity of the system.

Figure 4: We show example detections on held out categories, for which we have no detection training data, where our adapted network (LSDA) (shown with green box) correctly localizes and labels the object of interest, while the classification network baseline (shown in red) incorrectly localizes the object. This demonstrates that our algorithm learns to adapt the classifier into a detector which is sensitive to localization and background rejection.

4.3 Error Analysis on Held Out Categories

We next present an analysis of the types of errors that our system (LSDA) makes on the held out object categories. First, in Figure 5, we consider three types of false positive errors: Loc (localization errors), BG (confusion with background), and Oth (other error types, which is essentially correctly localizing an object, but misclassifying it). After separating all false positives into one of these three error types we visually show the percentage of errors found in each type as you look at the top scoring 25-3200 false positives.²

We consider the baseline of starting with the classification only network and show the false positive breakdown in Figure 5(b). Note that the majority of false positive errors are confusion with background and localization errors. In contrast, after adapting the network using LSDA we find that the errors found in the top false positives are far less due to localization and background confusion (see Figure 5(c)). Arguably one of the biggest differences between classification and detection is the ability to accurately localize objects and reject background. Therefore, we show that our method successfully adapts the classification parameters to be more suitable for detection. In Figure 5(a) we show examples of the top scoring Oth error types for LSDA on the held-out categories. This means the detector localizes an incorrect object type. For example, the motorcycle detector localized and mislabeled a bicycle and the lemon detector localized and mislabeled an orange. In general, we noticed that many of the top false positives from the Oth error type were confusion with very similar categories.

4.4 Large Scale Detection

To showcase the capabilities of our technique we produced a 7604 category detector. The first 200 categories correspond to the categories from the ILSVRC2013 challenge dataset which have bounding box labeled data available. The other 7404 categories correspond to leaf nodes in the ImageNet database and are trained using the available full image labeled classification data.
We trained a full detection network using the 200 fully annotated categories and trained the other 7404 last layer nodes using only the classification data. Since we lack bounding box annotated data for the majority of the categories we show example top detections in Figure 6. The results are filtered using non-max suppression across categories to only show the highest scoring categories. The main contribution of our algorithm is the adaptation technique for modifying a convolutional neural network for detection. However, the choice of network and how the net is used at test time both affect the detection time computation. We have therefore also implemented and released a version of our algorithm running with fast region proposals [27] on a spatial pyramid pooling network [28], reducing our detection time down to half a second per image (from 4s per image) with nearly the same performance. We hope that this will allow the use of our 7.6K model on large data sources such as videos. We have released the 7.6K model and code to run detection (both the way presented in this paper and our faster version) at lsda.berkeleyvision.org.

²We modified the analysis software made available by Hoiem et al.
[26] to work on ILSVRC-2013 detection.

[Figure 5: (a) example top-scoring false positives (microphone, miniskirt, motorcycle, mushroom, nail, laptop, lemon) where LSDA correctly localizes but incorrectly labels the object; (b-c) false positive type breakdown (Loc, Oth, BG) over the top 25-3200 false positives on the held-out categories, for the classification network (b) and the LSDA network (c).]

Figure 5: We examine the top scoring false positives from LSDA. Many of our top scoring false positives come from confusion with other categories (a). (b-c) Comparison of error type breakdown on the categories which have no training bounding boxes available (held-out categories). After adapting the network using our algorithm (LSDA), the percentage of false positive errors due to localization and background confusion is reduced (c) as compared to directly using the classification network in a detection framework (b).

[Figure 6: example detections with per-category scores, e.g. American bison: 7.0, taillight: 0.9, wheel and axle: 1.0, car: 6.0, whippet: 2.0, dog: 4.1, sofa: 8.0.]

Figure 6: Example top detections from our 7604 category detector. Detections from the 200 categories that have bounding box training data available are shown in blue. Detections from the remaining 7404 categories for which only classification training data is available are shown in red.

5 Conclusion

We have presented an algorithm that is capable of transforming a classifier into a detector. We use CNN models to train both a classification and a detection network.
Our multi-stage algorithm uses corresponding classification and detection data to learn the change from a classification CNN network to a detection CNN network, and applies that difference to future classifiers for which there is no available detection data. We show quantitatively that, without seeing any bounding box annotated data, we can improve the performance of a classification network by a 50% relative improvement using our adaptation algorithm. Given the significant improvement on the held-out categories, our algorithm has the potential to enable detection of tens of thousands of categories. All that would be needed is to train a classification layer for the new categories and use our fine-tuned detection model along with our output layer adaptation techniques to update the classification parameters directly. Our approach significantly reduces the overhead of producing a high quality detector. We hope that in doing so we will be able to minimize the gap between having strong large-scale classifiers and strong large-scale detectors. There is still a large gap to reach oracle (known bounding box labels) performance. For future work we would like to explore multiple instance learning techniques to discover and mine patches for the categories that lack bounding box data.

References

[1] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proc. CVPR, 2014.
[2] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303–338, June 2010.
[3] A. Berg, J. Deng, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. 2012.
[4] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Proc. NIPS, 2012.
[5] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun.
OverFeat: Integrated recognition, localization and detection using convolutional networks. CoRR, abs/1312.6229, 2013.
[6] K. Ali and K. Saenko. Confidence-rated multiple instance boosting for object detection. In Proc. CVPR, 2014.
[7] H. Song, R. Girshick, S. Jegelka, J. Mairal, Z. Harchaoui, and T. Darrell. On learning to localize objects with minimal supervision. In Proc. ICML, 2014.
[8] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In Proc. ICML, 2014.
[9] P. Fischer, A. Dosovitskiy, and T. Brox. Descriptor matching with convolutional neural networks: a comparison to SIFT. ArXiv e-prints, abs/1405.5769, 2014.
[10] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
[11] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Proc. CVPR, 2005.
[12] B. Kulis, K. Saenko, and T. Darrell. What you saw is not what you get: Domain adaptation using asymmetric kernel transforms. In Proc. CVPR, 2011.
[13] J. Yang, R. Yan, and A. Hauptmann. Adapting SVM classifiers to data with shifted distributions. In ICDM Workshops, 2007.
[14] Y. Aytar and A. Zisserman. Tabula rasa: Model transfer for object category detection. In Proc. ICCV, 2011.
[15] J. Hoffman, E. Rodner, J. Donahue, K. Saenko, and T. Darrell. Efficient learning of domain-invariant image representations. In Proc. ICLR, 2013.
[16] L. Duan, D. Xu, and I. W. Tsang. Learning with augmented features for heterogeneous domain adaptation. In Proc. ICML, 2012.
[17] J. Yang, R. Yan, and A. G. Hauptmann. Cross-domain video concept detection using adaptive SVMs. ACM Multimedia, 2007.
[18] Y. Aytar and A. Zisserman. Tabula rasa: Model transfer for object category detection. In Proc. ICCV, 2011.
[19] J. Donahue, J. Hoffman, E. Rodner, K. Saenko, and T. Darrell. Semi-supervised domain adaptation with instance constraints. In Proc. CVPR, 2013.
[20] J. Xu, S. Ramos, D. Vázquez, and A. M. López. Domain adaptation of deformable part-based models. IEEE Trans. on Pattern Analysis and Machine Intelligence, In Press, 2014.
[21] Y. Aytar and A. Zisserman. Enhancing exemplar SVMs using part level transfer regularization. In Proc. BMVC, 2012.
[22] D. Goehring, J. Hoffman, E. Rodner, K. Saenko, and T. Darrell. Interactive adaptation of real-time object detectors. In Proc. ICRA, 2014.
[23] J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, and A. W. M. Smeulders. Selective search for object recognition. International Journal of Computer Vision, 104(2):154–171, 2013.
[24] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[25] M. D. Smucker, J. Allan, and B. Carterette. A comparison of statistical significance tests for information retrieval evaluation. In Proc. CIKM, 2007.
[26] D. Hoiem, Y. Chodpathumwan, and Q. Dai. Diagnosing error in object detectors. In Proc. ECCV, 2012.
[27] P. Krähenbühl and V. Koltun. Geodesic object proposals. In Proc. ECCV, 2014.
[28] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In Proc. ECCV, 2014.
Repeated Contextual Auctions with Strategic Buyers

Kareem Amin, University of Pennsylvania, akareem@cis.upenn.edu
Afshin Rostamizadeh, Google Research, rostami@google.com
Umar Syed, Google Research, usyed@google.com

Abstract

Motivated by real-time advertising exchanges, we analyze the problem of pricing inventory in a repeated posted-price auction. We consider both the cases of a truthful and surplus-maximizing buyer, where the former makes decisions myopically on every round, and the latter may strategically react to our algorithm, forgoing short-term surplus in order to trick the algorithm into setting better prices in the future. We further assume a buyer's valuation of a good is a function of a context vector that describes the good being sold. We give the first algorithm attaining sublinear (Õ(T^{2/3})) regret in the contextual setting against a surplus-maximizing buyer. We also extend this result to repeated second-price auctions with multiple buyers.

1 Introduction

A growing fraction of Internet advertising is sold through automated real-time ad exchanges. In a real-time ad exchange, after a visitor arrives on a webpage, information about that visitor and webpage, called the context, is sent to several advertisers. The advertisers then compete in an auction to win the impression, or the right to deliver an ad to that visitor. One of the great advantages of online advertising compared to advertising in traditional media is the presence of rich contextual information about the impression. Advertisers can be particular about whom they spend money on, and are willing to pay a premium when the right impression comes along, a process known as targeting. Specifically, advertisers can use context to specify which auctions they would like to participate in, as well as how much they would like to bid.
These auctions are most often second-price auctions, wherein the winner is charged either the second highest bid or a prespecified reserve price (whichever is larger), and no sale occurs if the reserve price isn't cleared by one of the bids. One side-effect of targeting, which has been studied only recently, is the tendency for such exchanges to generate many auctions that are rather uncompetitive or thin, in which few advertisers are willing to participate. Again, this stems from the ability of advertisers to examine information about the impression before deciding to participate. While this selectivity is clearly beneficial for advertisers, it comes at a cost to webpage publishers. Many auctions in real-time ad exchanges ultimately involve just a single bidder, in which case the publisher's revenue is entirely determined by the selection of reserve price. Although a lone advertiser may have a high valuation for the impression, a low reserve price will fail to extract this as revenue for the seller if the advertiser is the only participant in the auction. As observed by [1], if a single buyer is repeatedly interacting with a seller, selecting revenue-maximizing reserve prices (for the seller) reduces to revenue maximization in a repeated posted-price setting: on each round, the seller offers a good to the buyer at a price. The buyer observes her value for the good, and then either accepts or rejects the offer. The seller's price-setting algorithm is known to the buyer, and the buyer behaves to maximize her (time-discounted) cumulative surplus, i.e., the total difference between the buyer's value and the price on rounds where she accepts the offer. The goal of the seller is to extract nearly as much revenue from the buyer as would have been possible if the process generating the buyer's valuations for the goods had been known to the seller before the start of the game. In [1] this goal is called minimizing strategic regret.
Online learning algorithms are typically designed to minimize regret in hindsight, which is defined as the difference between the loss of the best action and the loss of the algorithm given the observed sequence of events. Furthermore, it is assumed that the observed sequence of events is generated adversarially. However, in our setting, the buyer behaves self-interestedly, which is not necessarily the same as behaving adversarially, because the interaction between the buyer and seller is not zero-sum. A seller algorithm designed to minimize regret against an adversary can perform very suboptimally. Consider an example from [1]: a buyer who has a large valuation v for every good. If the seller announces an algorithm that minimizes (standard) regret, then the buyer should respond by only accepting prices below some ε ≪ v. In hindsight, posting a price of ε in every round would appear to generate the most revenue for the seller given the observed sequence of buyer actions, and therefore εT cumulative revenue is "no-regret". However, the seller was tricked by the strategic buyer; there was (v − ε)T revenue left on the table. Moreover, this is a good strategy for the buyer (she must have won the good for nearly nothing on Ω(T) rounds). The main contribution of this paper is extending the setting described above to one where the buyer's valuations in each round are a function of some context observed by both the buyer and seller. While [1] is motivated by our same application, they imagine an overly simplistic model wherein the buyer's value is generated by drawing an independent v_t from an unknown distribution D. This ignores that v_t will in reality be a function of contextual information x_t, information that is available to the seller, and the entire reason auctions are thin to begin with (without x_t there would be no targeting). We give the first algorithm that attains sublinear regret in the contextual setting, against a surplus-maximizing buyer.
We also note that in the non-contextual setting, regret is measured against the revenue that could have been made if D were known, and the single fixed optimal price were selected. Our comparator will be more challenging, as we wish to compete with the best function (in some class) from contexts x_t to prices. The rest of the paper is organized as follows. We first introduce a linear model by which values v_t are derived from contexts x_t. We then demonstrate an algorithm based on stochastic gradient descent (SGD) which achieves sublinear regret against a truthful buyer (one that accepts price p_t iff p_t ≤ v_t on every round t). The analysis for the truthful buyer uses preexisting high-probability bounds for SGD when minimizing strongly convex functions [15]. Our main result requires an extension of this analysis to cases in which "incorrect" gradients are occasionally observed. This lets us study a buyer that is allowed to best-respond to our algorithm, possibly rejecting offers that the truthful buyer would not, in order to receive better offers on future rounds. We also adapt our algorithm to non-linear settings via a kernelized version of the algorithm. Finally, we extend our results to second-price auctions with multiple buyers.

Related Work: The pricing of digital goods in repeated auctions has been considered by many other authors, including [1, 10, 3, 2, 5, 11]. However, most of these papers do not consider a buyer who behaves strategically across rounds. Buyers either behave randomly [11], or only participate in a single round [10, 3, 2, 5], or participate in multiple rounds but only desire a single good [13, 7] and therefore, in each of these cases, are not incentivized to manipulate the seller's behavior on future rounds. In reality, buyers repeatedly interact with the same seller.
There is empirical evidence suggesting that buyers are not myopic, and do in fact behave strategically to induce better prices in the future [6], as well as literature studying different strategies for strategic buyers [4, 8, 9].

2 Preliminaries

Throughout this work, we will consider a repeated auction where at every round a single seller prices an item to sell to a single buyer (extensions to multiple buyers are discussed in Section 5). The good sold at step t in the repeated auction is represented by a context (feature) vector x_t ∈ X = {x : ||x||_2 ≤ 1} and is drawn according to a fixed distribution D, which is unknown to the seller. The good has a value v_t that is a linear function of a parameter vector w*, also unknown to the seller: v_t = w*·x_t (extensions to non-linear functions of the context are considered in Section 5). We assume that w* ∈ W = {w : ||w||_2 ≤ 1} and also that 0 ≤ w*·x ≤ 1 with probability one with respect to D. For rounds t = 1, ..., T the repeated posted-price auction is defined as follows: (1) The buyer and seller both observe x_t ∼ D. (2) The seller offers a price p_t. (3) The buyer selects a_t ∈ {0, 1}. (4) The seller receives revenue a_t p_t. Here, a_t is an indicator variable that represents whether or not the buyer accepted the offered price (1 indicates yes). The goal of the seller is to select a price p_t in each round t such that the expected regret R(T) = E[Σ_{t=1}^T v_t − a_t p_t] is o(T). The choice of a_t will depend on the buyer's behavior. We will analyze two types of buyers in the subsequent sections of the paper, truthful and surplus-maximizing buyers, and will attempt to minimize regret against each. Note, the regret is the difference between the maximum revenue possible and the amount made by the algorithm that offers prices to the buyer.
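The four-step protocol and the empirical analogue of R(T) can be sketched in a few lines. This is an illustrative simulation, not from the paper: the hidden w*, the context distribution, and the uniform-random pricing baseline are all our own assumptions, and the buyer here is the truthful buyer analyzed in Section 3.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 1000
w_star = np.array([0.5, 0.3, 0.2])   # hidden market model (an assumption)

def draw_context(rng):
    """Contexts with ||x||_2 <= 1, standing in for the unknown D."""
    x = rng.uniform(0.0, 1.0, size=d)
    return x / max(np.linalg.norm(x), 1.0)

revenue, best = 0.0, 0.0
for t in range(T):
    x = draw_context(rng)            # (1) both parties observe x_t
    v = w_star @ x                   # buyer's private value v_t = w* . x_t
    p = rng.uniform()                # (2) seller posts a price (naive baseline)
    a = 1 if p <= v else 0           # (3) truthful buyer: a_t = 1{p_t <= v_t}
    revenue += a * p                 # (4) seller's round revenue a_t p_t
    best += v                        # benchmark extracting the full value
regret = best - revenue              # empirical analogue of R(T)
```

Since an accepted price never exceeds the value, each round contributes a regret term in [0, 1]; the point of the algorithms below is to shrink this gap to o(T).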
3 Truthful Buyer

In this section we introduce the Learn-Exploit Algorithm for Pricing (LEAP), which we show has regret of the form O(T^{2/3} √(log(T log T))) against a truthful buyer. A buyer is truthful if she accepts any offered price that gives a non-negative surplus, which is defined as the difference between the buyer's value for the good and the price paid: v_t − p_t. Therefore, for a truthful buyer we define a_t = 1{p_t ≤ v_t}. At this point, we note that the loss function v_t − 1{p_t ≤ v_t} p_t, which we wish to minimize over all rounds, is not convex, differentiable, or even continuous. If the price is even slightly above the truthful buyer's valuation, it is rejected and the seller makes zero revenue. To circumvent this, our algorithm will attempt to learn w* directly by minimizing a surrogate loss function for which w* is the minimizer. Our analysis hinges on recent results [15] which give optimal rates for gradient descent when the function being minimized is strongly convex. Our key trick is to offer prices so that, in each round, the buyer's behavior reveals the gradient of the surrogate loss at our current estimate for w*. Below we define the LEAP algorithm (Algorithm 1), which we show addresses these difficulties in the online setting.

Algorithm 1 LEAP algorithm
• Let 0 ≤ α ≤ 1, w_1 = 0 ∈ W, ε ≥ 0, λ > 0, T_α = ⌈αT⌉.
• For t = 1, ..., T_α (Learning phase)
  – Offer p_t ∼ U, where U is the uniform distribution on the interval [0, 1].
  – Observe a_t.
  – g̃_t = 2(w_t·x_t − a_t) x_t.
  – w_{t+1} = Π_W(w_t − (1/(λt)) g̃_t).
• For t = T_α + 1, ..., T (Exploit phase)
  – Offer p_t = w_{T_α+1}·x_t − ε.

The algorithm depends on input parameters α, ε and λ. The α parameter determines what fraction of rounds are spent in the learning phase as opposed to the exploit phase. During the learning phase, uniform random prices are offered and the model parameters are updated as a function of the feedback given by the buyer.
During the exploit phase, the model parameters are fixed and the offered price is computed as a linear function of these parameters minus the value of the ε parameter. The ε parameter can be thought of as inversely proportional to our confidence in the fixed model parameters and is used to hedge against the possibility of over-estimating the value of a good. The λ parameter is a learning-rate parameter set according to the minimum eigenvalue of the covariance matrix, and is defined below in Assumption 1. In order to prove a regret bound, we first show that the learning phase of the algorithm is minimizing a strongly convex surrogate loss, and then show that this implies the seller enjoys near-optimal revenue during the exploit phase of the algorithm. Let g_t = 2(w_t·x_t − 1{p_t ≤ v_t}) x_t and F(w) = E_{x∼D}[(w*·x − w·x)²]. Note that when the buyer is truthful, g̃_t = g_t. Against a truthful buyer, g_t is an unbiased estimate of the gradient of F.

Proposition 1. The random variable g_t satisfies E[g_t | w_t] = ∇F(w_t). Also, ||g_t|| ≤ 4 with probability 1.

Proof. First note that E[g_t | w_t] = E_{x_t}[2(w_t·x_t − E_{p_t}[1{p_t ≤ v_t}]) x_t] = E_{x_t}[2(w_t·x_t − Pr_{p_t}(p_t ≤ v_t)) x_t]. Since p_t is drawn uniformly from [0, 1] and v_t is guaranteed to lie in [0, 1], we have that Pr(p_t ≤ v_t) = ∫_0^1 1{p_t ≤ v_t} dp_t = v_t. Plugging this back into g_t gives us exactly the expression for ∇F(w_t). Furthermore, ||g_t|| = 2|w_t·x_t − 1{p_t ≤ v_t}| ||x_t|| ≤ 4, since |w_t·x_t| ≤ ||w_t|| ||x_t|| ≤ 1 and ||x_t|| ≤ 1.

We now introduce the notion of strong convexity. A twice-differentiable function H(w) is λ-strongly convex if and only if the Hessian matrix ∇²H(w) is full rank and the minimum eigenvalue of ∇²H(w) is at least λ. Note that the function F is strongly convex if and only if the covariance matrix of the data is full-rank, since ∇²F(w) = 2E_x[xx^T]. We make the following assumption.

Assumption 1. The minimum eigenvalue of 2E_x[xx^T] is at least λ > 0.
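Algorithm 1 can be sketched directly in NumPy. This is a minimal sketch, not the authors' code: the `buyer` callback, the one-hot context distribution, and all parameter values in the usage at the bottom are illustrative assumptions.

```python
import numpy as np

def project(w):
    """Projection onto the unit ball W = {w : ||w||_2 <= 1}."""
    n = np.linalg.norm(w)
    return w / n if n > 1.0 else w

def leap(contexts, buyer, alpha, eps, lam, rng):
    """Algorithm 1 (LEAP). `buyer(x, p) -> a in {0, 1}` is the buyer's response."""
    T, d = contexts.shape
    T_alpha = int(np.ceil(alpha * T))
    w = np.zeros(d)
    revenue = 0.0
    for t, x in enumerate(contexts, start=1):
        if t <= T_alpha:                       # learning phase
            p = rng.uniform()                  # p_t ~ U[0, 1]
            a = buyer(x, p)
            g = 2.0 * (w @ x - a) * x          # noisy gradient of surrogate F
            w = project(w - g / (lam * t))
        else:                                  # exploit phase
            p = w @ x - eps                    # shade the price by eps
            a = buyer(x, p)
        revenue += a * p
    return w, revenue

# Illustrative run: one-hot contexts in d = 2, so 2 E[x x^T] = I and
# lam = 1 satisfies Assumption 1.
rng = np.random.default_rng(1)
w_star = np.array([0.8, 0.5])
X = np.eye(2)[rng.integers(0, 2, size=2000)]
truthful = lambda x, p: 1 if p <= w_star @ x else 0
w_hat, revenue = leap(X, truthful, alpha=0.5, eps=0.05, lam=1.0, rng=rng)
```

Against a truthful buyer, the learned `w_hat` concentrates around `w_star`, so exploit prices sit just below each round's value.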
Note that if this is not the case, then there is redundancy in the features, and the data can be projected (for example using PCA) into a lower dimensional feature space with a full-rank covariance matrix and without any loss of information. The seller can compute an offline estimate of both this projection and λ by collecting a dataset of context vectors before starting to offer prices to the buyer. Thus, in view of Proposition 1 and the strong convexity assumption, we see the learning phase of the LEAP algorithm is conducting a stochastic gradient descent to minimize the λ-strongly convex function F, where at each time step we update w_{t+1} = Π_W(w_t − (1/(λt)) g̃_t) and g̃_t = g_t is an unbiased estimate of the gradient. We now make use of an existing bound ([14, 15]) for stochastic gradient descent on strongly convex functions.

Lemma 1 ([14] Proposition 1). Let δ ∈ (0, 1/e), T_α ≥ 4, and suppose F is λ-strongly convex over the convex set W. Also suppose E[g_t | w_t] = ∇F(w_t) and ||g_t||² ≤ G² with probability 1. Then with probability at least 1 − δ, for any t ≤ T_α it holds that

||w_t − w*||² ≤ (624 log(log(T_α)/δ) + 1) G² / (λ² t),

where w* = argmin_w F(w).

This guarantees that, with high probability, the distance between the learned parameter vector w_t and the target weight vector w* is bounded and decreasing as t increases. This allows us to carefully tune the ε parameter that is used in the exploit phase of the algorithm (see Lemma 6 in the appendix). We are now equipped to prove a bound on the regret of the LEAP algorithm.

Theorem 1. For any T > 4, 0 < α < 1 and assuming a truthful buyer, the LEAP algorithm with ε = √((624 log(√T_α log(T_α)) + 1) G² / (λ² T_α)), where G = 4, has regret against a truthful buyer at most

R(T) ≤ 2αT + 4√(T/α) · √((624 log(√T_α log(T_α)) + 1) G² / λ²),

which for α = T^{−1/3} implies a regret of at most

R(T) ≤ 2T^{2/3} + 4T^{2/3} √((624 log(T^{1/3} log(T^{2/3})) + 1) G² / λ²) = O(T^{2/3} √(log(T log T))).

Proof.
We first decompose the regret:

E[Σ_{t=1}^T v_t − a_t p_t] = E[Σ_{t=1}^{T_α} v_t − a_t p_t] + E[Σ_{t=T_α+1}^T v_t − a_t p_t] ≤ T_α + Σ_{t=T_α+1}^T E[v_t − a_t p_t],   (1)

where we have used the fact |v_t − a_t p_t| ≤ 1. Let A denote the event that, for all t ∈ {T_α+1, ..., T}, a_t = 1 ∧ v_t − p_t ≤ ε. Lemma 6 (see Appendix, Section A.1) proves that A occurs with probability at least 1 − T_α^{−1/2}. For brevity, let N = (624 log(√T_α log(T_α)) + 1) G²/λ², so that ε = √(N/T_α); then we can decompose the expectation in the following way:

E[v_t − a_t p_t] = Pr[A] E[v_t − a_t p_t | A] + (1 − Pr[A]) E[v_t − a_t p_t | ¬A] ≤ Pr[A] ε + (1 − Pr[A]) ≤ ε + T_α^{−1/2} = √(N/T_α) + √(1/T_α) ≤ 2√(N/T_α),

where the inequalities follow from the definition of A, Lemma 6, and the fact that |v_t − a_t p_t| < 1. Plugging this back into equation (1) gives T_α + Σ_{t=T_α+1}^T E[v_t − a_t p_t] ≤ T_α + (⌈(1−α)T⌉/√T_α)·2√N ≤ 2αT + 4√(T/α)·√N, proving the first result of the theorem. α = T^{−1/3} gives the final expression.

In the next section we consider the more challenging setting of a surplus-maximizing buyer, who may accept/reject prices in a manner meant to lower the prices offered.

4 Surplus-Maximizing Buyer

In the previous section we considered a truthful buyer who myopically accepts every price below her value, i.e., she sets a_t = 1{p_t ≤ v_t} for every round t. Let S(T) = E[Σ_{t=1}^T γ_t a_t (v_t − p_t)] be the buyer's cumulative discounted surplus, where {γ_t} is a decreasing discount sequence, with γ_t ∈ (0, 1). When prices are offered by the LEAP algorithm, the buyer's decisions about which prices to accept during the learning phase have an influence on the prices that she is offered in the exploit phase, and so a surplus-maximizing buyer may be able to increase her cumulative discounted surplus by occasionally behaving untruthfully. In this section we assume that the buyer knows the pricing algorithm and seeks to maximize S(T).

Assumption 2. The buyer is surplus-maximizing, i.e., she behaves so as to maximize S(T), given the seller's pricing algorithm.

We say that a lie occurs in any round t where a_t ≠ 1{p_t ≤ v_t}.
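To see concretely why lies matter, consider a toy comparison (our own illustration, not from the paper): a buyer who rejects every learning-phase price drags the seller's estimate toward w = 0, so every exploit price becomes −ε, the buyer wins every good at a negative price, and seller revenue collapses. The market model, context distribution, and parameter values below are all assumptions.

```python
import numpy as np

def run(learning_policy, seed=0):
    """Seller runs LEAP-style pricing; `learning_policy(v, p) -> a` may lie
    during the learning phase. Exploit rounds are played truthfully, since
    behavior there cannot influence future prices."""
    rng = np.random.default_rng(seed)
    d, T, T_alpha, lam, eps = 2, 2000, 1000, 1.0, 0.05
    w_star = np.array([0.8, 0.5])          # illustrative market model
    w = np.zeros(d)
    revenue = 0.0
    for t in range(1, T + 1):
        x = np.eye(d)[rng.integers(d)]     # one-hot contexts (an assumption)
        v = w_star @ x
        if t <= T_alpha:
            p = rng.uniform()
            a = learning_policy(v, p)
            w = w - 2.0 * (w @ x - a) * x / (lam * t)
            w = w / max(np.linalg.norm(w), 1.0)   # project onto unit ball
        else:
            p = w @ x - eps
            a = 1 if p <= v else 0
        revenue += a * p
    return revenue

truthful_rev = run(lambda v, p: 1 if p <= v else 0)
lying_rev = run(lambda v, p: 0)   # extreme strategy: reject every learning price
# Rejecting everything keeps the estimate at w = 0, so every exploit price is
# -eps and the seller's revenue collapses (it is in fact negative here).
```

Of course, with discounting this extreme strategy is also costly to the buyer, which is exactly the tension the lemmas below quantify.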
Note that a surplus-maximizing buyer has no reason to lie during the exploit phase, since the buyer's behavior during exploit rounds has no effect on the prices offered. Let L = {t : 1 ≤ t ≤ T_α ∧ a_t ≠ 1{p_t ≤ v_t}} be the set of learning rounds where the buyer lies, and let L = |L| denote the number of lies. Observe that g̃_t ≠ g_t in any lie round (recall that E[g_t | w_t] = ∇F(w_t), i.e., g_t is the stochastic gradient in round t). We take a moment to note the necessity of the discount factor γ_t. This essentially models the buyer as valuing surplus at the current time step more than in the future. Another way of interpreting this is that the seller is more "patient" than the buyer. In [1] the authors show a lower bound on the regret against a surplus-maximizing buyer in the contextless setting of the form Ω(T_γ), where T_γ = Σ_{t=1}^T γ_t. Thus, if no decreasing discount factor is used, i.e., γ_t = 1, then sublinear regret is not possible. Note, the lower bound of the contextless setting applies here as well, since the case of a distribution D that induces a fixed context x* on every round is a special case of our setting. In that case the problem reduces to the fixed unknown value setting, since on every round v_t = w*·x*. In the rest of this section we prove an O(T^{2/3} √(log(T)/log(1/γ))) bound on the seller's regret under the assumption that the buyer is surplus-maximizing and that her discount sequence is γ_t = γ^{t−1} for some γ ∈ (0, 1). The idea of the proof is to show that the buyer incurs a cost for telling lies, and therefore will not tell very many, and thus the lies she does tell will not significantly affect the seller's estimate of w*.

Bounding the cost of lies: Observe that in any learning round where the surplus-maximizing buyer tells a lie, she loses surplus in that round relative to the truthful buyer, either by accepting a price higher than her value (when a_t = 1 and v_t < p_t) or by rejecting a price less than her value (when a_t = 0 and v_t > p_t).
This observation can be used to show that lies result in a substantial loss of surplus relative to the truthful buyer, provided that in most of the lie rounds there is a nontrivial gap between the buyer's value and the seller's price. Because prices are chosen uniformly at random during the learning phase, this is in fact quite likely, and with high probability the surplus lost relative to the truthful buyer during the learning phase grows exponentially with the number of lies. The precise quantity is stated in the lemma below. A full proof appears in the appendix, Section A.3.

Lemma 2. Let the discount sequence be defined as γ_t = γ^{t−1} for 0 < γ < 1, and assume the buyer has told L lies. Then for δ > 0, with probability at least 1 − δ, the buyer loses a surplus of at least ((γ^{−L+3} − 1) / (8 T_α log(1/δ))) · (γ^{T_α} / (1 − γ)) relative to the truthful buyer during the learning phase.

Bounding the number of lies: Although we argued in the previous lemma that lies during the learning phase cause the surplus-maximizing buyer to lose surplus relative to the truthful buyer, those lies may result in lower prices offered during the exploit phase, and thus the overall effect of lying may be beneficial to the buyer. However, we show that there is a limit on how large that benefit can be, and thus we have the following high-probability bound on the number of learning-phase lies.

Lemma 3. Let the discount sequence be defined as γ_t = γ^{t−1} for 0 < γ < 1. Then for δ > 0, with probability at least 1 − δ, the number of lies satisfies L ≤ log(32 T_α (1/δ) log(2/δ) + 1) / log(1/γ).
Due to the discount factor, the surplus lost will eventually outweigh the surplus gained as the number of lies increases, implying a limit to the number of lies a surplus maximizing buyer can tell. Bounding the effect of lies: In Section 3 we argued that if the buyer is truthful then, in each learning round t of the LEAP algorithm, ˜gt is a stochastic gradient with expected value rF(wt). We then use the analysis of stochastic gradient descent in [14] to prove that wT↵+1 converges to w⇤ (Lemma 1). However, if the buyer can lie then ˜gt is not necessarily the gradient and Lemma 1 no longer applies. Below we extend the analysis in Rakhlin et al. [14] to a setting where the gradient may be corrupted by lies up to L times. Lemma 4. Let δ 2 (0, 1/e), T↵≥2. If the buyer tells L lies then with probability at least 1 −δ, kwT↵+1 −w⇤k2 1 T↵+1 ⇣ (624 log(log(T↵)/δ)+e2)G2 λ2 + 4e2L λ ⌘ . The proof of the lemma is similar to that of Lemma 1, but with extra steps needed to bound the additional error introduced due to the erroneous gradients. Due to space constraints, we present the proof in the appendix, Section A.6. Note that, modulo constants, the bound only differs by the additive term L/T↵. That is, there is an extra additive error term that depends on the ratio of lies to number of learning rounds. Thus, if no lies are told, then there is no additive error. While if many lies are told, e.g. L = T↵, then the bound become vacuous. Main result: We are now ready to prove an upper bound on the regret of the LEAP algorithm when the buyer is surplus-maximizing. Theorem 2. 
For any 0 < α < 1 (such that T_α ≥ 4), 0 < γ < 1, and assuming a surplus-maximizing buyer with exponential discounting γ_t = γ^{t−1}, the LEAP algorithm using parameter

ε = √( (1/T_α) · ( (624 log(2√T_α log(T_α)) + e²)G²/λ² + 4e² log(128√T_α log(4√T_α) + 1) / (λ log(1/γ)) ) ),

where G = 4, has regret against a surplus-maximizing buyer at most

R(T) ≤ 2αT + 4√(T/α) · √( (624 log(2√T_α log(T_α)) + e²)G²/λ² + 4e² log(128√T_α log(4√T_α) + 1) / (λ log(1/γ)) ),

which for α = T^{−1/3} implies R(T) ≤ O(T^{2/3} √(log(T)/log(1/γ))).

Proof. Taking the high probability statements of Lemma 3 and Lemma 4 with δ/2 ∈ [0, 1/e] tells us that with probability at least 1 − δ,

||w_{T_α} − w*||² ≤ (1/T_α) · ( (624 log(2 log(T_α)/δ) + e²)G²/λ² + 4e² log(64 T_α (1/δ) log(4/δ) + 1) / (λ log(1/γ)) ).

Since we assume T_α ≥ 4, setting δ = T_α^{−1/2} implies δ/2 = T_α^{−1/2}/2 ≤ 1/e, which is required for Lemma 4 to hold. Thus, if we set the algorithm parameter ε as indicated in the statement of the theorem, we have with probability at least 1 − T_α^{−1/2}, for all t ∈ {T_α + 1, ..., T}, that a_t = 1 and v_t − p_t ≤ ε, which follows from the same argument used for Lemma 6. Finally, the same steps as in the proof of Theorem 1 can be used to show the first inequality. Setting α = T^{−1/3} shows the second inequality and completes the theorem.

Note that if γ → 1 (i.e., no discounting) the bound becomes vacuous, which is to be expected, since the Ω(T_γ) lower bound on regret demonstrates the necessity of a discounting factor. If γ → 0 (i.e., the buyer becomes myopic, thereby truthful), then we recover the truthful bound modulo constants. Thus for any γ < 1, we have shown the first sublinear bound on the seller's regret against a surplus-maximizing buyer in the contextual setting.

5 Extensions

Doubling trick: A drawback of Theorem 2 is that optimally tuning the parameters ε and α requires knowledge of the horizon T.
The usual way of handling this problem in the standard online learning setting is to apply the 'doubling trick': if a learning algorithm that requires knowledge of T has regret O(T^c) for some constant c, then running independent instances of the algorithm during consecutive phases of exponentially increasing length (i.e., the i-th phase has length 2^i) will also have regret O(T^c). We can also apply the doubling trick to our strategic setting, but we must exercise caution and argue that running the algorithm in phases does not affect the behavior of a surplus-maximizing buyer in a way that invalidates the proof of Theorem 2. We formally state and prove the relevant corollary in Section A.8 of the Appendix.

Kernelized Algorithm: In some cases, assuming that the value of a buyer is a linear function of the context may not be accurate. In this section we briefly introduce a kernelized version of LEAP, which allows for a non-linear model of the buyer's value as a function of the context x. At the same time, the regret guarantees provided in the previous sections still apply, since we can view the model as a linear function of the induced features φ(x), where φ(·) is a non-linear map and the kernel function K is used to compute the inner product in this induced feature space: K(x, x') = φ(x)·φ(x'). For a more complete discussion of kernel methods see, for example, [12, 16]. For what follows, we define the projection operation Π_K(β, (x_1, ..., x_t)) = β / √(Σ_{i,j=1}^t β_i β_j K(x_i, x_j)). The proof of Proposition 2 is moved to the appendix (Section A.7) in the interest of space.

Algorithm 2 Kernelized LEAP algorithm
• Let K(·, ·) be a PDS function s.t. ∀x : |K(x, x)| ≤ 1, 0 ≤ α ≤ 1, T_α = ⌈αT⌉, β = 0 ∈ R^{T_α}, ε ≥ 0, λ > 0.
• For t = 1, ..., T_α
  – Offer p_t ∼ U
  – Observe a_t
  – β_t = −(2/(λt)) (Σ_{i=1}^{t−1} β_i K(x_i, x_t) − a_t)
  – β = Π_K(β, (x_1, ..., x_t))
• For t = T_α + 1, ..., T
  – Offer p_t = Σ_{i=1}^{T_α} β_i K(x_i, x_t) − ε

Proposition 2.
Algorithm 2 is a kernelized implementation of the LEAP algorithm with W = {w : ||w||_2 ≤ 1} and w_1 = 0. Furthermore, if we consider the feature space induced by the kernel K via an explicit mapping φ(·), the learned linear hypothesis is represented as w_t = Σ_{i=1}^{t−1} β_i φ(x_i), which satisfies ||w_t||² = Σ_{i,j=1}^{t−1} β_i β_j K(x_i, x_j) ≤ 1. The gradient is g_t = 2(Σ_{i=1}^{t−1} β_i φ(x_i)·φ(x_t) − a_t) φ(x_t), and ||g_t|| ≤ 4.

Multiple Buyers: So far we have assumed that the seller is interacting with a single buyer across multiple posted-price auctions. Recall that the motivation for considering this setting was repeated second-price auctions against a single buyer, a situation that happens often in online advertising because of targeting. One might nevertheless wonder whether the algorithm can be applied to a setting where there can be multiple buyers, and whether it remains robust in such a setting. We describe a way in which the analysis for the posted-price setting can carry over to multiple buyers. Formally, suppose there are K buyers, and on round t, buyer k receives a valuation of v_{k,t}. We let k_val(t) = argmax_k v_{k,t}, v⁺_t = v_{k_val(t),t}, and v⁻_t = max_{k ≠ k_val(t)} v_{k,t}: the buyer with the highest valuation, the highest valuation itself, and the second-highest valuation, respectively. In a second-price auction, each buyer also submits a bid b_{k,t}, and we define k_bid(t), b⁺_t and b⁻_t analogously to k_val(t), v⁺_t, v⁻_t, corresponding to the highest bidder, the largest bid, and the second-largest bid. After the seller announces a reserve price p_t, buyers submit their bids {b_{k,t}}, and the seller receives round-t revenue of r_t = 1{p_t ≤ b⁺_t} max{b⁻_t, p_t}. The goal of the seller is to minimize R(T) = E[Σ_{t=1}^T v⁺_t − r_t]. We assume that buyers are surplus-maximizing, and select a strategy that maps previous reserve prices p_1, ..., p_{t−1}, p_t, and v_{k,t} to a choice of bid on round t.
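The round-revenue rule r_t = 1{p_t ≤ b⁺_t} max{b⁻_t, p_t} can be written as a one-round helper (a sketch; the numeric bids in the examples are made up):

```python
def second_price_revenue(bids, reserve):
    """One-round revenue r_t = 1{p_t <= b+} * max(b-, p_t), taking b- = 0
    when only a single bid is submitted."""
    ranked = sorted(bids, reverse=True)
    b_plus = ranked[0]
    b_minus = ranked[1] if len(ranked) > 1 else 0.0
    return max(b_minus, reserve) if reserve <= b_plus else 0.0

second_price_revenue([0.9, 0.4], 0.6)   # -> 0.6: reserve beats the second bid
second_price_revenue([0.9, 0.4], 0.3)   # -> 0.4: second bid sets the price
second_price_revenue([0.5], 0.7)        # -> 0.0: reserve not cleared, no sale
```

The single-bid case makes the thin-auction point concrete: with one bidder, revenue is the reserve itself when cleared and zero otherwise, which is exactly the posted-price setting.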
The key to extending the LEAP algorithm to the multiple-buyer setting will be to treat market valuations in the same way we treated the individual buyer's valuation in the single-buyer setting. In order to do so, we make a modelling assumption analogous to that of Section 2. Specifically, we assume that there is some w* such that v_t^+ = w*^⊤ x_t.¹ Note that we assume a model on the market price itself. At first glance, this might seem like a strange assumption, since v_t^+ is itself the result of a maximization over v_{k,t}. However, we argue that it is actually rather unrestrictive. In fact, the individual valuations v_{k,t} can be generated arbitrarily so long as v_{k,t} ≤ w*^⊤ x_t and equality holds for some k. In other words, we can imagine that nature first computes the market valuation v_t^+, then arbitrarily (even adversarially) selects which buyer gets this valuation, as well as the other buyer valuations. Now we can define a_t = 1{p_t ≤ b_t^+}, indicating whether the largest bid was at least the reserve, and consider running the LEAP algorithm, but with this choice of a_t. Notice that for any t, a_t p_t ≤ r_t, thereby giving us the following key fact: R(T) ≤ R′(T) := E[ Σ_{t=1}^T (v_t^+ − a_t p_t) ]. We also redefine L to be the number of market lies: rounds t ≤ T_α where a_t ≠ 1{p_t ≤ v_t^+}. Note the market tells a lie if either all valuations were below p_t but somebody bid over p_t anyway, or if some valuation was above p_t but no buyer decided to outbid p_t. With this choice of L, Lemma 4 holds exactly as written, but in the multiple-buyer setting. It is well known [17] that single-shot second-price auctions are strategy-proof. Therefore, during the exploit phase of the algorithm, all buyers are incentivized to bid truthfully. Thus, in order to bound R′(T), and therefore R(T), we need only rederive Lemma 3 to bound the number of market lies. We begin by partitioning the market lies.
Let L = {t : t ≤ T_α, 1{p_t ≤ v_t^+} ≠ 1{p_t ≤ b_t^+}}, while letting L_k = {t : t ≤ T_α, v_t^+ < p_t ≤ b_t^+, k_bid(t) = k} ∪ {t : t ≤ T_α, b_t^+ < p_t ≤ v_t^+, k_val(t) = k}. In other words, we attribute a lie to buyer k if (1) the reserve was larger than the market value, but buyer k won the auction anyway, or (2) buyer k had the largest valuation, but nobody cleared the reserve. Checking that L = ∪_k L_k and letting L_k = |L_k| tells us that L ≤ Σ_{k=1}^K L_k. Furthermore, we can bound L_k using nearly identical arguments to the posted-price setting, giving us the subsequent corollary for the multiple-buyer setting.

Lemma 5. Let the discount sequence be defined as γ_t = γ^{t−1} for 0 < γ < 1. Then for δ > 0, with probability at least 1 − δ, L_k ≤ log(32 T_α / δ + 1) / log(1/γ) for each k, and hence L ≤ K L_k.

Proof. We first consider the surplus buyer k loses during learning rounds, compared to if he had been truthful. Suppose buyer k unilaterally switches to always bidding his value (i.e., b_{k,t} = v_{k,t}). For a single-shot second-price auction, being truthful is a dominant strategy, and so he would only increase his surplus on learning rounds. Furthermore, on each round in L_k he would increase his (undiscounted) surplus by at least |v_{k,t} − p_t|. Now the analysis follows as in Lemmas 2 and 3.

Corollary 1. In the setting of multiple surplus-maximizing buyers, the LEAP algorithm with α = T^{−1/3} and
ε = √( (1/T_α) [ (624 log(2√T_α log(T_α)) + e²) G² / λ² + 4 e² K log(128√T_α log(4√T_α) + 1) / (λ log(1/γ)) ] )
has regret R(T) ≤ R′(T) ≤ O( T^{2/3} √( K log(T) / log(1/γ) ) ).

6 Conclusion

In this work, we have introduced the scenario of contextual auctions in the presence of surplus-maximizing buyers and have presented an algorithm that is able to achieve sublinear regret in this setting, assuming buyers receive a discounted surplus. Once again, we stress the importance of the contextual setting, as it contributes to the rise of targeted bids that result in auctions with single high bidders, essentially reducing the auction to the posted-price scenario studied in this paper.
Future directions for extending this work include considering different surplus discount rates, as well as understanding whether small modifications to standard contextual online learning algorithms can lead to no-strategic-regret guarantees.

¹Note that we could also apply the kernelized LEAP algorithm (Algorithm 2) in the multiple-buyer setting.

References
[1] Kareem Amin, Afshin Rostamizadeh, and Umar Syed. Learning prices for repeated auctions with strategic buyers. In Advances in Neural Information Processing Systems, pages 1169–1177, 2013.
[2] Ziv Bar-Yossef, Kirsten Hildrum, and Felix Wu. Incentive-compatible online auctions for digital goods. In Proceedings of the Symposium on Discrete Algorithms, pages 964–970. SIAM, 2002.
[3] Avrim Blum, Vijay Kumar, Atri Rudra, and Felix Wu. Online learning in online auctions. In Proceedings of the Symposium on Discrete Algorithms, pages 202–204. SIAM, 2003.
[4] Matthew Cary, Aparna Das, Ben Edelman, Ioannis Giotis, Kurtis Heimerl, Anna R. Karlin, Claire Mathieu, and Michael Schwarz. Greedy bidding strategies for keyword auctions. In Proceedings of the 8th ACM Conference on Electronic Commerce, pages 262–271. ACM, 2007.
[5] Nicolo Cesa-Bianchi, Claudio Gentile, and Yishay Mansour. Regret minimization for reserve prices in second-price auctions. In Proceedings of the Symposium on Discrete Algorithms. SIAM, 2013.
[6] Benjamin Edelman and Michael Ostrovsky. Strategic bidder behavior in sponsored search auctions. Decision Support Systems, 43(1):192–198, 2007.
[7] Mohammad Taghi Hajiaghayi, Robert Kleinberg, and David C. Parkes. Adaptive limited-supply online auctions. In Proceedings of the 5th ACM Conference on Electronic Commerce, pages 71–80. ACM, 2004.
[8] Brendan Kitts and Benjamin Leblanc. Optimal bidding on keyword auctions. Electronic Markets, 14(3):186–201, 2004.
[9] Brendan Kitts, Parameshvyas Laxminarayan, Benjamin Leblanc, and Ryan Meech.
A formal analysis of search auctions including predictions on click fraud and bidding tactics. In Workshop on Sponsored Search Auctions, 2005.
[10] Robert Kleinberg and Tom Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In Symposium on Foundations of Computer Science, pages 594–605. IEEE, 2003.
[11] Andres Munoz Medina and Mehryar Mohri. Learning theory and algorithms for revenue optimization in second price auctions with reserve. In Proceedings of the 31st International Conference on Machine Learning, pages 262–270, 2014.
[12] Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
[13] David C. Parkes. Online mechanisms. In Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay Vazirani, editors, Algorithmic Game Theory. Cambridge University Press, 2007.
[14] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. arXiv preprint arXiv:1109.5647, 2011.
[15] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 449–456, 2012.
[16] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[17] Hal R. Varian and Jack Repcheck. Intermediate Microeconomics: A Modern Approach, volume 6. W. W. Norton & Company, New York, NY, 2010.
Learning with Pseudo-Ensembles Philip Bachman McGill University Montreal, QC, Canada phil.bachman@gmail.com Ouais Alsharif McGill University Montreal, QC, Canada ouais.alsharif@gmail.com Doina Precup McGill University Montreal, QC, Canada dprecup@cs.mcgill.ca Abstract We formalize the notion of a pseudo-ensemble, a (possibly infinite) collection of child models spawned from a parent model by perturbing it according to some noise process. E.g., dropout [9] in a deep neural network trains a pseudo-ensemble of child subnetworks generated by randomly masking nodes in the parent network. We examine the relationship of pseudo-ensembles, which involve perturbation in model-space, to standard ensemble methods and existing notions of robustness, which focus on perturbation in observation-space. We present a novel regularizer based on making the behavior of a pseudo-ensemble robust with respect to the noise process generating it. In the fully-supervised setting, our regularizer matches the performance of dropout. But, unlike dropout, our regularizer naturally extends to the semi-supervised setting, where it produces state-of-the-art results. We provide a case study in which we transform the Recursive Neural Tensor Network of [19] into a pseudo-ensemble, which significantly improves its performance on a real-world sentiment analysis benchmark. 1 Introduction Ensembles of models have long been used as a way to obtain robust performance in the presence of noise. Ensembles typically work by training several classifiers on perturbed input distributions, e.g. bagging randomly elides parts of the distribution for each trained model and boosting re-weights the distribution before training and adding each model to the ensemble. In the last few years, dropout methods have achieved great empirical success in training deep models, by leveraging a noise process that perturbs the model structure itself. 
However, there has not yet been much analysis relating this approach to classic ensemble methods or other approaches to learning robust models. In this paper, we formalize the notion of a pseudo-ensemble, which is a collection of child models spawned from a parent model by perturbing it with some noise process. Sec. 2 defines pseudo-ensembles, after which Sec. 3 discusses the relationships between pseudo-ensembles and standard ensemble methods, as well as existing notions of robustness. Once the pseudo-ensemble framework is defined, it can be leveraged to create new algorithms. In Sec. 4, we develop a novel regularizer that minimizes variation in the output of a model when it is subject to noise on its inputs and its internal state (or structure). We also discuss the relationship of this regularizer to standard dropout methods. In Sec. 5 we show that our regularizer can reproduce the performance of dropout in a fully-supervised setting, while also naturally extending to the semi-supervised setting, where it produces state-of-the-art performance on some real-world datasets. Sec. 6 presents a case study in which we extend the Recursive Neural Tensor Network from [19] by converting it into a pseudo-ensemble. We generate the pseudo-ensemble using a noise process based on Gaussian parameter fuzzing and latent subspace sampling, and empirically show that both types of perturbation contribute to significant performance improvements beyond those of the original model. We conclude in Sec. 7.

2 What is a pseudo-ensemble?

Consider a data distribution pxy which we want to approximate using a parametric parent model fθ. A pseudo-ensemble is a collection of ξ-perturbed child models fθ(x; ξ), where ξ comes from a noise process pξ. Dropout [9] provides the clearest existing example of a pseudo-ensemble. Dropout samples subnetworks from a source network by randomly masking the activity of subsets of its input/hidden layer nodes.
The parameters shared by the subnetworks, through their common source network, are learned to minimize the expected loss of the individual subnetworks. In pseudo-ensemble terms, the source network is the parent model, each sampled subnetwork is a child model, and the noise process consists of sampling a node mask and using it to extract a subnetwork. The noise process used to generate a pseudo-ensemble can take fairly arbitrary forms. The only requirement is that sampling a noise realization ξ, and then imposing it on the parent model fθ, be computationally tractable. This generality allows deriving a variety of pseudo-ensemble methods from existing models. For example, for a Gaussian Mixture Model, one could perturb the means of the mixture components with, e.g., Gaussian noise and their covariances with, e.g., Wishart noise. The goal of learning with pseudo-ensembles is to produce models robust to perturbation. To formalize this, the general pseudo-ensemble objective for supervised learning can be written as follows¹:

minimize_θ E_{(x,y)∼pxy} E_{ξ∼pξ} L(fθ(x; ξ), y),   (1)

where (x, y) ∼ pxy is an (observation, label) pair drawn from the data distribution, ξ ∼ pξ is a noise realization, fθ(x; ξ) represents the output of a child model spawned from the parent model fθ via ξ-perturbation, y is the true label for x, and L(ŷ, y) is the loss for predicting ŷ instead of y. The generality of the pseudo-ensemble approach comes from broad freedom in describing the noise process pξ and the mechanism by which ξ perturbs the parent model fθ. Many useful methods could be developed by exploring novel noise processes for generating perturbations beyond the independent masking noise that has been considered for neural networks and the feature noise that has been considered in the context of linear models.
For example, [17] develops a method for learning "ordered representations" by applying dropout/masking noise in a deep autoencoder while enforcing a particular "nested" structure among the random masking variables in ξ, and [2] relies heavily on random perturbations when training Generative Stochastic Networks.

3 Related work

Pseudo-ensembles are closely related to traditional ensemble methods, as well as to methods for learning models robust to input uncertainty. By optimizing the expected loss of individual ensemble members' outputs, rather than the expected loss of the joint ensemble output, pseudo-ensembles differ from boosting, which iteratively augments an ensemble to minimize the loss of the joint output [8]. Meanwhile, the child models in a pseudo-ensemble share parameters and structure through their parent model, which will tend to correlate their behavior. This distinguishes pseudo-ensembles from traditional "independent member" ensemble methods, like bagging and random forests, which typically prefer diversity in the behavior of their members, as this provides bias and variance reduction when the outputs of their members are averaged [8]. In fact, the regularizers we introduce in Sec. 4 explicitly minimize diversity in the behavior of their pseudo-ensemble members. The definition and use of pseudo-ensembles are strongly motivated by the intuition that models trained to be robust to noise should generalize better than models that are (overly) sensitive to small perturbations. Previous work on robust learning has overwhelmingly concentrated on perturbations affecting the inputs to a model. For example, the optimization community has produced a large body of theoretical and empirical work addressing "stochastic programming" [18] and "robust optimization" [4].

¹It is easy to formulate analogous objectives for unsupervised learning, maximum likelihood, etc.

Stochastic programming seeks to produce a solution to a, e.g., linear program that performs
well on average, with respect to a known distribution over perturbations of parameters in the problem definition². Robust optimization generally seeks to produce a solution to a, e.g., linear program with optimal worst-case performance over a given set of possible perturbations of parameters in the problem definition. Several well-known machine learning methods have been shown equivalent to certain robust optimization problems. For example, [24] shows that using Lasso (i.e., ℓ1 regularization) in a linear regression model is equivalent to a robust optimization problem. [25] shows that learning a standard SVM (i.e., hinge loss with ℓ2 regularization in the corresponding RKHS) is also equivalent to a robust optimization problem. Supporting the notion that noise-robustness improves generalization, [25] proves many of the statistical guarantees that make SVMs so appealing directly from properties of their robust optimization equivalents, rather than using more complicated proofs involving, e.g., VC-dimension.

Figure 1: How to compute the partial noisy output f^i_θ: (1) compute the ξ-perturbed output ˜f^{i−1}_θ of layers < i, (2) compute f^i_θ from ˜f^{i−1}_θ, (3) ξ-perturb f^i_θ to get ˜f^i_θ, (4) repeat up through the layers > i.

More closely related to pseudo-ensembles are recent works that consider approaches to learning linear models with inputs perturbed by different sorts of noise. [5] shows how to efficiently learn a linear model that (globally) optimizes expected performance w.r.t. certain types of noise (e.g., Gaussian, zero-masking, Poisson) on its inputs, by marginalizing over the noise. Particularly relevant to our work is [21], which studies dropout (applied to linear models) closely, and shows how its effects are well-approximated by a Tikhonov (i.e., quadratic/ridge) regularization term that can be estimated from both labeled and unlabeled data.
The authors of [21] leveraged this label-agnosticism to achieve state-of-the-art performance on several sentiment analysis tasks. While all the work described above considers noise on the input-space, pseudo-ensembles involve noise in the model-space. This can actually be seen as a superset of input-space noise, as a model can always be extended with an initial "identity layer" that copies the noise-free input. Noise on the input-space can then be reproduced by noise on the initial layer, which is now part of the model-space.

4 The Pseudo-Ensemble Agreement regularizer

We now present Pseudo-Ensemble Agreement (PEA) regularization, which can be used in a fairly general class of computation graphs. For concreteness, we present it in the case of deep, layered neural networks. PEA regularization operates by controlling distributional properties of the random vectors {f^2_θ(x; ξ), ..., f^d_θ(x; ξ)}, where f^i_θ(x; ξ) gives the activities of the ith layer of fθ in response to x when layers < i are perturbed by ξ while layer i is left unperturbed. Fig. 1 illustrates the construction of these random vectors. We will assume that layer d is the output layer, i.e., f^d_θ(x) gives the output of the unperturbed parent model in response to x, and f^d_θ(x; ξ) = fθ(x; ξ) gives the response of the child model generated by ξ-perturbing fθ. Given the random vectors f^i_θ(x; ξ), PEA regularization is defined as follows:

R(fθ, px, pξ) = E_{x∼px} E_{ξ∼pξ} [ Σ_{i=2}^d λ_i V_i(f^i_θ(x), f^i_θ(x; ξ)) ],   (2)

where fθ is the parent model to regularize, x ∼ px is an unlabeled observation, V_i(·, ·) is the "variance" penalty imposed on the distribution of activities in the ith layer of the pseudo-ensemble spawned from fθ, and λ_i controls the relative importance of V_i. Note that for Eq. 2 to act on the "variance" of the f^i_θ(x; ξ), we should have f^i_θ(x) ≈ E_ξ f^i_θ(x; ξ). This approximation holds reasonably well for many useful neural network architectures [1, 22].
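A minimal pure-Python sketch of this penalty for one sampled child model, using a squared-error choice of V_i (all helper names are ours, and the tiny MLP is purely illustrative):

```python
import random

def relu(v):
    return [max(0.0, x) for x in v]

def dense(v, W, b):
    # Dense layer: W has shape (out, in), b has shape (out,).
    return [sum(wi * xi for wi, xi in zip(row, v)) + bj for row, bj in zip(W, b)]

def forward(x, params, drop=0.0, rng=None):
    """Return the list of per-layer activities. If drop > 0, each unit is
    masked *after* its layer's activity is recorded, so acts[i] is the
    layer-i activity with layers < i perturbed and layer i unperturbed."""
    acts, h = [], x
    for W, b in params:
        h = relu(dense(h, W, b))
        acts.append(h)
        if drop > 0.0:
            h = [0.0 if rng.random() < drop else hi / (1.0 - drop) for hi in h]
    return acts

def pea_penalty(x, params, lambdas, drop, rng):
    """Sum_i lambda_i * ||f_i(x) - f_i(x; xi)||^2 between the clean parent
    activities and one sampled child's activities (squared-error V_i)."""
    clean = forward(x, params)
    noisy = forward(x, params, drop=drop, rng=rng)
    return sum(lam * sum((c - n) ** 2 for c, n in zip(ci, ni))
               for lam, ci, ni in zip(lambdas, clean, noisy))
```

With drop = 0 the child equals the parent and the penalty is exactly zero; in practice the expectation over ξ would be estimated by averaging several sampled children per input.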
In our experiments we actually compute the penalties V_i between independently-sampled pairs of child models. We consider several different measures of variance to penalize, which we will introduce as needed.

²Note that "parameters" in a linear program are analogous to inputs in standard machine learning terminology, as they are observed quantities (rather than quantities optimized over).

4.1 The effect of PEA regularization on feature co-adaptation

One of the original motivations for dropout was that it helps prevent "feature co-adaptation" [9]. That is, dropout encourages individual features (i.e., hidden node activities) to remain helpful, or at least not become harmful, when other features are removed from their local context. We provide some support for that claim by examining the following optimization objective³:

minimize_θ E_{(x,y)∼pxy} [L(fθ(x), y)] + E_{x∼px} E_{ξ∼pξ} [ Σ_{i=2}^d λ_i V_i(f^i_θ(x), f^i_θ(x; ξ)) ],   (3)

in which the supervised loss L depends only on the parent model fθ and the pseudo-ensemble only appears in the PEA regularization term. For simplicity, let λ_i = 0 for i < d, λ_d = 1, and V_d(v1, v2) = D_KL(softmax(v1) || softmax(v2)), where softmax is the standard softmax and D_KL(p1 || p2) is the KL-divergence between p1 and p2 (we indicate this penalty by V_k). We use xent(softmax(fθ(x)), y) for the loss L(fθ(x), y), where xent(ŷ, y) is the cross-entropy between the predicted distribution ŷ and the true distribution y. Eq. 3 never explicitly passes label information through a ξ-perturbed network, so ξ only acts through its effects on the distribution of the parent model's predictions when subjected to ξ-perturbation. In this case, (3) trades off accuracy against feature co-adaptation, as measured by the degree to which the feature activity distribution at layer i is affected by perturbation of the feature activity distributions for layers < i. We test this regularizer empirically in Sec. 5.1.
The observed ability of this regularizer to reproduce the performance benefits of standard dropout supports the notion that discouraging "co-adaptation" plays an important role in dropout's empirical success. Also, by acting strictly to make the output of the parent model more robust to ξ-perturbation, the performance of this regularizer rebuts the claim in [22] that noise-robustness plays only a minor role in the success of standard dropout.

4.2 Relating PEA regularization to standard dropout

The authors of [21] show that, assuming a noise process ξ such that E_ξ[f(x; ξ)] = f(x), logistic regression under the influence of dropout optimizes the following objective:

Σ_{i=1}^n E_ξ [ℓ(fθ(x_i; ξ), y_i)] = Σ_{i=1}^n ℓ(fθ(x_i), y_i) + R(fθ),   (4)

where fθ(x_i) = θx_i, ℓ(fθ(x_i), y_i) is the logistic regression loss, and the regularization term is:

R(fθ) ≡ Σ_{i=1}^n E_ξ [A(fθ(x_i; ξ)) − A(fθ(x_i))],   (5)

where A(·) indicates the log partition function for logistic regression. Using only a KL-divergence penalty at the output layer, PEA-regularized logistic regression minimizes:

Σ_{i=1}^n ℓ(fθ(x_i), y_i) + E_ξ [D_KL(softmax(fθ(x_i)) || softmax(fθ(x_i; ξ)))].   (6)

Defining the distribution pθ(x) as softmax(fθ(x)), we can re-write the PEA part of Eq. 6 to get:

E_ξ [D_KL(pθ(x) || pθ(x; ξ))] = E_ξ [ Σ_{c∈C} p^c_θ(x) log (p^c_θ(x) / p^c_θ(x; ξ)) ]   (7)
= Σ_{c∈C} E_ξ [ p^c_θ(x) log ( (exp f^c_θ(x) Σ_{c′∈C} exp f^{c′}_θ(x; ξ)) / (exp f^c_θ(x; ξ) Σ_{c′∈C} exp f^{c′}_θ(x)) ) ]   (8)
= Σ_{c∈C} E_ξ [ p^c_θ(x)(f^c_θ(x) − f^c_θ(x; ξ)) + p^c_θ(x)(A(fθ(x; ξ)) − A(fθ(x))) ]   (9)
= E_ξ [ Σ_{c∈C} p^c_θ(x)(A(fθ(x; ξ)) − A(fθ(x))) ] = E_ξ [A(fθ(x; ξ)) − A(fθ(x))],   (10)

which brings us to the regularizer in Eq. 5.

³While dropout is well-supported empirically, its mode of action is not well-understood outside the limited context of linear models.

4.3 PEA regularization for semi-supervised learning

PEA regularization works as-is in a semi-supervised setting, as the penalties V_i do not require label information.
We train networks for semi-supervised learning in two ways, both of which apply the objective in Eq. 1 on labeled examples and PEA regularization on the unlabeled examples. The first way applies a tanh-variance penalty V_t and the second applies a xent-variance penalty V_x, which we define as follows:

V_t(ȳ, ỹ) = ‖tanh(ȳ) − tanh(ỹ)‖²₂,  V_x(ȳ, ỹ) = xent(softmax(ȳ), softmax(ỹ)),   (11)

where ȳ and ỹ represent the outputs of a pair of independently sampled child models, and tanh operates element-wise. The xent-variance penalty can be further expanded as:

V_x(ȳ, ỹ) = D_KL(softmax(ȳ) || softmax(ỹ)) + ent(softmax(ȳ)),   (12)

where ent(·) denotes the entropy. Thus, V_x combines the KL-divergence penalty with an entropy penalty, which has been shown to perform well in a semi-supervised setting [7, 14]. Recall that at non-output layers we regularize with the "direction" penalty V_c. Before the masking noise, we also apply zero-mean Gaussian noise to the input and to the biases of all nodes. In the experiments, we chose between the two output-layer penalties V_t/V_x based on observed performance.

5 Testing PEA regularization

We tested PEA regularization in three scenarios: supervised learning on MNIST digits, semi-supervised learning on MNIST digits, and semi-supervised transfer learning on a dataset from the NIPS 2011 Workshop on Challenges in Learning Hierarchical Models [13]. Full implementations of our methods, written with THEANO [3], and scripts/instructions for reproducing all of the results in this section are available online at: http://github.com/Philip-Bachman/Pseudo-Ensembles.

5.1 Fully-supervised MNIST

The MNIST dataset comprises 60k 28x28 grayscale hand-written digit images for training and 10k images for testing. For the supervised tests we used SGD hyperparameters roughly following those in [9].
We trained networks with two hidden layers of 800 nodes each, using rectified-linear activations and an ℓ2-norm constraint of 3.5 on the incoming weights for each node. For both standard dropout (SDE) and PEA, we used a softmax → xent loss at the output layer. We initialized hidden layer biases to 0.1, output layer biases to 0, and inter-layer weights to zero-mean Gaussian noise with σ = 0.01. We trained all networks for 1000 epochs with no early-stopping (i.e., performance was measured for the final network state). SDE obtained 1.05% error averaged over five random initializations. Using the PEA penalty V_k at the output layer and computing the classification loss/gradient only for the unperturbed parent network, we obtained 1.08% averaged error. The ξ-perturbation involved node masking but not bias noise. Thus, training the same network as used for dropout while ignoring the effects of masking noise on the classification loss, but encouraging the network to be robust to masking noise (as measured by V_k), matched the performance of dropout. This result supports the equivalence between dropout and this particular form of PEA regularization, which we derived in Section 4.2.

5.2 Semi-supervised MNIST

We tested semi-supervised learning on MNIST following the protocol described in [23]. These tests split MNIST's 60k training samples into labeled/unlabeled subsets, with the labeled sets containing nl ∈ {100, 600, 1000, 3000} samples. For labeled sets of size 600, 1000, and 3000, the full training data was randomly split 10 times into labeled/unlabeled sets and results were averaged over the splits. For labeled sets of size 100, we averaged over 50 random splits. The labeled sets had the same number of examples for each class. We tested PEA regularization with and without denoising autoencoder pre-training [20].⁴

⁴See our code for a perfectly complete description of our pre-training.

Pre-trained networks were always PEA-regularized with penalty V_x
on the output layer and V_c on the hidden layers. Non-pre-trained networks used V_t on the output layer, except when the labeled set was of size 100, for which V_x was used. In the latter case, we gradually increased the λ_i over the course of training, as suggested by [7]. We generated the pseudo-ensembles for these tests using masking noise and Gaussian input+bias noise with σ = 0.1. Each network had two hidden layers with 800 nodes. Weight norm constraints and SGD hyperparameters were set as for supervised learning.

Figure 2: Performance of PEA regularization for semi-supervised learning using the MNIST dataset. The top row of filter blocks in (a) shows the result of training a fixed network architecture on 600 labeled samples using: weight norm constraints only (RAW), standard dropout (SDE), standard dropout with PEA regularization on unlabeled data (PEA), and PEA preceded by pre-training as a denoising autoencoder [20] (PEA+PT). The bottom filter block in (a) shows the result of training with PEA on 100 labeled samples. (b) shows test error over the course of training for RAW/SDE/PEA, averaged over 10 random training sets of size 600/1000.

Table 1 compares the performance of PEA regularization with previous results. Aside from CNN, all methods in the table are "general", i.e., they do not use convolutions or other image-specific techniques to improve performance. The main comparisons of interest are between PEA(+) and other methods for semi-supervised learning with neural networks, i.e., E-NN, MTC+, and PL+. E-NN (EmbedNN from [23]) uses a nearest-neighbors-based graph Laplacian regularizer to make predictions "smooth" with respect to the manifold underlying the data distribution px. MTC+ (the Manifold Tangent Classifier from [16]) regularizes predictions to be smooth with respect to the data manifold by penalizing gradients in a learned approximation of the tangent space of the data manifold.
PL+ (the Pseudo-Label method from [14]) uses the joint-ensemble predictions on unlabeled data as "pseudo-labels", and treats them like "true" labels. The classification losses on true labels and pseudo-labels are balanced by a scaling factor which is carefully modulated over the course of training. PEA regularization (without pre-training) outperforms all previous methods in every setting except 100 labeled samples, where PL+ performs better, but with the benefit of pre-training. By adding pre-training (i.e., PEA+), we achieve a two-fold reduction in error when using only 100 labeled samples.

nl     TSVM   NN     CNN    E-NN   MTC+   PL+    SDE    SDE+   PEA    PEA+
100    16.81  25.81  22.98  16.86  12.03  10.49  22.89  13.54  10.79  5.21
600    6.16   11.44  7.68   5.97   5.13   4.01   7.59   5.68   2.44   2.87
1000   5.38   10.70  6.45   5.73   3.64   3.46   5.80   4.71   2.23   2.64
3000   3.45   6.04   3.35   3.59   2.57   2.69   3.60   3.00   1.91   2.30

Table 1: Performance (% test error) of semi-supervised learning methods on MNIST with varying numbers of labeled samples. From left to right the methods are: Transductive SVM, neural net, convolutional neural net, EmbedNN [23], Manifold Tangent Classifier [16], Pseudo-Label [14], standard dropout plus fuzzing [9], dropout plus fuzzing with pre-training, PEA, and PEA with pre-training. Methods with a "+" used contractive or denoising autoencoder pre-training [20]. The testing protocol and the results left of MTC+ were presented in [23]. The MTC+ and PL+ results are from their respective papers, and the remaining results are our own. We trained SDE(+) using the same network/SGD hyperparameters as for PEA; the only difference was that the former did not regularize for pseudo-ensemble agreement on the unlabeled examples. We measured performance on the standard 10k test samples for MNIST, and all of the 60k training samples not included in a given labeled training set were made available without labels. The best result for each training size is in bold.
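The two output-layer penalties V_t and V_x used in these semi-supervised runs (Eq. 11) can be written directly as code (a pure-Python sketch; the helper names are ours):

```python
import math

def softmax(y):
    m = max(y)                         # subtract max for numerical stability
    e = [math.exp(v - m) for v in y]
    s = sum(e)
    return [v / s for v in e]

def xent(p, q):
    # Cross-entropy between distributions p and q (small eps avoids log(0)).
    eps = 1e-12
    return -sum(pi * math.log(qi + eps) for pi, qi in zip(p, q))

def v_t(y_bar, y_tilde):
    # tanh-variance penalty: || tanh(y_bar) - tanh(y_tilde) ||_2^2
    return sum((math.tanh(a) - math.tanh(b)) ** 2 for a, b in zip(y_bar, y_tilde))

def v_x(y_bar, y_tilde):
    # xent-variance penalty: xent(softmax(y_bar), softmax(y_tilde)),
    # i.e. KL-divergence plus the entropy of softmax(y_bar) (Eq. 12).
    return xent(softmax(y_bar), softmax(y_tilde))
```

Here y_bar and y_tilde are the output logits of two independently sampled child models. Note that v_x is nonzero even for identical outputs, reflecting the entropy term in Eq. 12.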
5.3 Transfer learning challenge (NIPS 2011)

The organizers of the NIPS 2011 Workshop on Challenges in Learning Hierarchical Models [13] proposed a challenge to improve performance on a target domain by using labeled and unlabeled data from two related source domains. The labeled data source was CIFAR-100 [11], which contains 50k 32x32 color images in 100 classes. The unlabeled data source was a collection of 100k 32x32 color images taken from Tiny Images [11]. The target domain comprised 120 32x32 color images divided unevenly among 10 classes. Neither the classes nor the images in the target domain appeared in either of the source domains. The winner of this challenge used convolutional Spike-and-Slab Sparse Coding, followed by max pooling and a linear SVM on the pooled features [6]. Labels on the source data were ignored, and the source data was used to pre-train a large set of convolutional features. After applying the pre-trained feature extractor to the 120 training images, this method achieved an accuracy of 48.6% on the target domain, the best published result on this dataset. We applied semi-supervised PEA regularization by first using the CIFAR-100 data to train a deep network comprising three max-pooled convolutional layers followed by a fully-connected hidden layer feeding into a softmax → xent output layer. Afterwards, we removed the hidden and output layers, replaced them with a pair of fully-connected hidden layers feeding into an ℓ2-hinge-loss output layer⁵, and then trained the non-convolutional part of the network on the 120 training images from the target domain. For this final training phase, which involved three layers, we tried standard dropout and dropout with PEA regularization on the source data. Standard dropout achieved 55.5% accuracy, which improved to 57.4% when we added PEA regularization on the source data. While most of the improvement over the previous state-of-the-art (i.e.,
48.6%) was due to dropout and an improved training strategy (i.e. supervised pre-training vs. unsupervised pre-training), controlling the feature activity and output distributions of the pseudo-ensemble on unlabeled data allowed significant further improvement.

6 Improved sentiment analysis using pseudo-ensembles

We now show how the Recursive Neural Tensor Network (RNTN) from [19] can be adapted using pseudo-ensembles, and evaluate it on the Stanford Sentiment Treebank (STB) task. The STB task involves predicting the sentiment of short phrases extracted from movie reviews on RottenTomatoes.com. Ground-truth labels for the phrases, and the "sub-phrases" produced by processing them with a standard parser, were generated using Amazon Mechanical Turk. In addition to pseudo-ensembles, we used a more "compact" bilinear form in the function f : R^n x R^n -> R^n that the RNTN applies recursively, as shown in Figure 3. The computation for the ith dimension of the original f (for v_i in R^{n x 1}) is:

f_i(v_1, v_2) = tanh([v_1; v_2]^T T_i [v_1; v_2] + M_i [v_1; v_2; 1]),

whereas we use:

f_i(v_1, v_2) = tanh(v_1^T T_i v_2 + M_i [v_1; v_2; 1]),

in which T_i indicates a matrix slice of tensor T and M_i indicates a vector row of matrix M. In the original RNTN, T is 2n x 2n x n, and in ours it is n x n x n. The other parameters in the RNTNs are a transform matrix M in R^{n x (2n+1)} and a classification matrix C in R^{c x (n+1)}; each RNTN outputs c class probabilities for vector v using softmax(C[v; 1]). A ";" indicates vertical vector stacking. We initialized the model with pre-trained word vectors. The pre-training used word2vec on the training and dev set, with three modifications: dropout/fuzzing was applied during pre-training (to match the conditions in the full model), the vector norms were constrained so the pre-trained vectors had standard deviation 0.5, and tanh was applied during word2vec (again, to match conditions in the full model). All code required for these experiments is publicly available online.
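To make the compact composition concrete, here is a minimal NumPy sketch of the function above. The toy dimensions, random initialization, and names (`compose_compact`, etc.) are our own illustration, not the released code:

```python
import numpy as np

def compose_compact(v1, v2, T, M):
    """Compact RNTN composition: f_i(v1, v2) = tanh(v1^T T_i v2 + M_i [v1; v2; 1]).

    T has shape (n, n, n): one (n x n) slice T[i] per output dimension.
    M has shape (n, 2n + 1): affine part acting on the stacked children.
    """
    stacked = np.concatenate([v1, v2, [1.0]])       # [v1; v2; 1]
    bilinear = np.einsum('inm,n,m->i', T, v1, v2)   # v1^T T_i v2 for each i
    return np.tanh(bilinear + M @ stacked)

# Toy usage mirroring Figure 3: compose two word vectors, then compose
# the result with a third word vector.
rng = np.random.default_rng(0)
n = 4
T = rng.normal(scale=0.1, size=(n, n, n))
M = rng.normal(scale=0.1, size=(n, 2 * n + 1))
w1, w2, w3 = (rng.normal(size=n) for _ in range(3))
p1 = compose_compact(w2, w3, T, M)   # p1 = f(w2, w3)
r1 = compose_compact(w1, p1, T, M)   # r1 = f(w1, p1)
```

Note how the bilinear term only needs an n x n slice per output dimension, which is what shrinks T from 2n x 2n x n to n x n x n.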
We generated pseudo-ensembles from a parent RNTN using two types of perturbation: subspace sampling and weight fuzzing. We performed subspace sampling by keeping only n/2 randomly sampled latent dimensions out of the n in the parent model when processing a given phrase tree. Using the same sampled dimensions for a full phrase tree reduced computation time significantly, as the parameter matrices/tensor could be "sliced" to include only the relevant dimensions (footnote 6). During training we sampled a new subspace each time a phrase tree was processed, and computed test-time outputs for each phrase tree by averaging over 50 randomly sampled subspaces. We performed weight fuzzing during training by perturbing parameters with zero-mean Gaussian noise before processing each phrase tree and then applying gradients w.r.t. the perturbed parameters to the unperturbed parameters. We did not fuzz during testing. Weight fuzzing has an interesting interpretation as an implicit convolution of the objective function (defined w.r.t. the model parameters) with an isotropic Gaussian distribution. In the case of recursive/recurrent neural networks this may prove quite useful, as convolving the objective with a Gaussian reduces its curvature, thereby mitigating some problems stemming from ill-conditioned Hessians [15]. For further description of the model and training/testing process, see the supplementary material and the code from http://github.com/Philip-Bachman/Pseudo-Ensembles.

Footnote 5: We found that L2-hinge-loss performed better than softmax/xent in this setting. Switching to softmax/xent degrades the dropout and PEA results but does not change their ranking.
Footnote 6: This allowed us to train significantly larger models before over-fitting offset increased model capacity. But training these larger models would have been tedious without the parameter slicing permitted by subspace sampling, as feedforward for the RNTN is O(n^3).
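As a rough sketch of the weight-fuzzing step described above (our own illustration, assuming plain SGD and a generic objective): the gradient is evaluated at a Gaussian-perturbed copy of the parameters but applied to the clean copy.

```python
import numpy as np

def fuzzed_sgd_step(w, grad_fn, lr=0.1, sigma=0.05, rng=None):
    """One SGD step with weight fuzzing: evaluate the gradient at a
    Gaussian-perturbed copy of the parameters, but update the clean copy."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.normal(scale=sigma, size=w.shape)
    g = grad_fn(w + noise)          # gradient w.r.t. the perturbed parameters
    return w - lr * g               # ...applied to the unperturbed parameters

# Toy objective 0.5 * ||w - target||^2; fuzzing implicitly convolves it
# with an isotropic Gaussian, as described in the text.
target = np.array([1.0, -2.0, 0.5])
grad = lambda w: w - target
w = np.zeros(3)
rng = np.random.default_rng(0)
for _ in range(200):
    w = fuzzed_sgd_step(w, grad, lr=0.1, sigma=0.05, rng=rng)
```

On this toy objective the iterates hover around the clean optimum, with a residual jitter controlled by sigma.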
              RNTN    PV  DCNN   CTN  CTN+F  CTN+S  CTN+F+S
Fine-grained  45.7  48.7  48.5  43.1   46.1   47.5     48.4
Binary        85.4  87.8  86.8  83.4   85.3   87.8     88.9

Table 2: Fine-grained and binary root-level prediction performance for the Stanford Sentiment Treebank task. RNTN is the original "full" model presented in [19]. CTN is our "compact" tensor network model. +F/S indicates augmenting our base model with weight fuzzing/subspace sampling. PV is the Paragraph Vector model in [12] and DCNN is the Dynamic Convolutional Neural Network model in [10].

[Figure 3: How to feedforward through the Recursive Neural Tensor Network. First, the tree structure is generated by parsing the input sentence. Then, the vector for each node is computed by look-up at the leaves (i.e. words/tokens) and by a tensor-based transform of the node's children's vectors otherwise. The figure illustrates this on the phrase "perhaps the best", with leaves w1, w2, w3 and internal nodes p1 = f(w2, w3) and r1 = f(w1, p1).]

Following the protocol suggested by [19], we measured root-level (i.e. whole-phrase) prediction accuracy on two tasks: fine-grained sentiment prediction and binary sentiment prediction. The fine-grained task involves predicting classes from 1-5, with 1 indicating strongly negative sentiment and 5 indicating strongly positive sentiment. The binary task is similar, but ignores "neutral" phrases (those in class 3) and considers only whether a phrase is generally negative (classes 1/2) or positive (classes 4/5). Table 2 shows the performance of our compact RNTN in four forms that include none, one, or both of subspace sampling and weight fuzzing. Using only L2 regularization on its parameters, our compact RNTN approached the performance of the full RNTN, roughly matching the performance of the second best method tested in [19]. Adding weight fuzzing improved performance past that of the full RNTN. Adding subspace sampling improved performance further, and adding both noise types pushed our RNTN well past the full RNTN, resulting in state-of-the-art performance on the binary task.
7 Discussion

We proposed the notion of a pseudo-ensemble, which captures methods such as dropout [9] and feature noising in linear models [5, 21] that have recently drawn significant attention. Using the conceptual framework provided by pseudo-ensembles, we developed and applied a regularizer that performs well empirically and provides insight into the mechanisms behind dropout's success. We also showed how pseudo-ensembles can be used to improve the performance of an already powerful model on a competitive real-world sentiment analysis benchmark. We anticipate that this idea, which unifies several rapidly evolving lines of research, can be used to develop several other novel and successful algorithms, especially for semi-supervised learning.

References
[1] P. Baldi and P. Sadowski. Understanding dropout. In NIPS, 2013.
[2] Y. Bengio, É. Thibodeau-Laufer, G. Alain, and J. Yosinski. Deep generative stochastic networks trainable by backprop. arXiv:1306.1091v5 [cs.LG], 2014.
[3] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: A CPU and GPU math expression compiler. In Python for Scientific Computing Conference (SciPy), 2010.
[4] D. Bertsimas, D. B. Brown, and C. Caramanis. Theory and applications of robust optimization. SIAM Review, 53(3), 2011.
[5] L. Van der Maaten, M. Chen, S. Tyree, and K. Q. Weinberger. Learning with marginalized corrupted features. In ICML, 2013.
[6] I. J. Goodfellow, A. Courville, and Y. Bengio. Large-scale feature learning with spike-and-slab sparse coding. In ICML, 2012.
[7] Y. Grandvalet and Y. Bengio. Semi-Supervised Learning, chapter Entropy Regularization. MIT Press, 2006.
[8] T. Hastie, J. Friedman, and R. Tibshirani. Elements of Statistical Learning II. 2008.
[9] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580v1 [cs.NE], 2012.
[10] N. Kalchbrenner, E. Grefenstette, and P. Blunsom. A convolutional neural network for modelling sentences. In ACL, 2014.
[11] A. Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, University of Toronto, 2009.
[12] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. In ICML, 2014.
[13] Q. V. Le, M. A. Ranzato, R. R. Salakhutdinov, A. Y. Ng, and J. Tenenbaum. Workshop on challenges in learning hierarchical models: Transfer learning and optimization. In NIPS, 2011.
[14] D.-H. Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In ICML, 2013.
[15] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML, 2013.
[16] S. Rifai, Y. Dauphin, P. Vincent, Y. Bengio, and X. Muller. The manifold tangent classifier. In NIPS, 2011.
[17] O. Rippel, M. A. Gelbart, and R. P. Adams. Learning ordered representations with nested dropout. In ICML, 2014.
[18] A. Shapiro, D. Dentcheva, and A. Ruszczynski. Lectures on Stochastic Programming: Modeling and Theory. Society for Industrial and Applied Mathematics (SIAM), 2009.
[19] R. Socher, A. Perelygin, J. Y. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, 2013.
[20] P. Vincent, H. Larochelle, and Y. Bengio. Extracting and composing robust features with denoising autoencoders. In ICML, 2008.
[21] S. Wager, S. Wang, and P. Liang. Dropout training as adaptive regularization. In NIPS, 2013.
[22] D. Warde-Farley, I. J. Goodfellow, A. Courville, and Y. Bengio. An empirical analysis of dropout in piecewise linear networks. In ICLR, 2014.
[23] J. Weston, F. Ratle, and R. Collobert. Deep learning via semi-supervised embedding. In ICML, 2008.
[24] H. Xu, C. Caramanis, and S. Mannor. Robust regression and lasso. In NIPS, 2009.
[25] H. Xu, C. Caramanis, and S. Mannor.
Robustness and regularization of support vector machines. JMLR, 10, 2009.
Top Rank Optimization in Linear Time

Nan Li1 Rong Jin2 Zhi-Hua Zhou1
1National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
2Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824
{lin,zhouzh}@lamda.nju.edu.cn rongjin@cse.msu.edu

Abstract

Bipartite ranking aims to learn a real-valued ranking function that orders positive instances before negative instances. Recent efforts in bipartite ranking are focused on optimizing ranking accuracy at the top of the ranked list. Most existing approaches either optimize task-specific metrics or extend the rank loss by placing more emphasis on the error associated with the top ranked instances, leading to a high computational cost that is super-linear in the number of training instances. We propose a highly efficient approach, titled TopPush, for optimizing accuracy at the top that has computational complexity linear in the number of training instances. We present a novel analysis that bounds the generalization error for the top ranked instances for the proposed approach. Empirical study shows that the proposed approach is highly competitive with the state-of-the-art approaches and is 10-100 times faster.

1 Introduction

Bipartite ranking aims to learn a real-valued ranking function that places positive instances above negative instances. It has attracted much attention because of its applications in several areas such as information retrieval and recommender systems [32, 25]. Many ranking methods have been developed for bipartite ranking, and most of them are essentially based on pairwise ranking. These algorithms reduce the ranking problem to a binary classification problem by treating each positive-negative instance pair as a single object to be classified [16, 12, 5, 39, 38, 33, 1, 3].
Since the number of instance pairs can grow quadratically in the number of training instances, one limitation of these methods is their high computational cost, which makes them not scalable to large datasets. Considering that for applications such as document retrieval and recommender systems only the top ranked instances will be examined by users, there has been a growing interest in learning ranking functions that perform especially well at the top of the ranked list [7, 39, 38, 33, 1, 3, 27, 40]. Most of these approaches can be categorized into two groups. The first group maximizes the ranking accuracy at the top of the ranked list by optimizing task-specific metrics [17, 21, 23, 40], such as average precision (AP) [42], NDCG [39] and partial AUC [27, 28]. The main limitation of these methods is that they often result in non-convex optimization problems that are difficult to solve efficiently. Structural SVM [37] addresses this issue by translating the non-convexity into an exponential number of constraints. It can still be computationally challenging because it usually requires searching for the most violated constraint at each iteration of optimization. In addition, these methods are statistically inconsistent [36, 21], leading to suboptimal solutions. The second group of methods is based on pairwise ranking. They design special convex loss functions that place more penalty on the ranking errors related to the top ranked instances [38, 33, 1]. Since these methods are based on pairwise ranking, their computational costs are usually proportional to the number of positive-negative instance pairs, making them unattractive for large datasets. In this paper, we address the computational challenge of bipartite ranking by designing a ranking algorithm, named TopPush, that can efficiently optimize the ranking accuracy at the top. The key feature of the proposed TopPush algorithm is that its time complexity is only linear in the number of training instances.
This is in contrast to most existing methods for bipartite ranking, whose computational costs depend on the number of instance pairs. Moreover, we develop a novel analysis for bipartite ranking. One deficiency of the existing theoretical studies [33, 1] on bipartite ranking is that they try to bound the probability for a positive instance to be ranked before any negative instance, leading to relatively pessimistic bounds. We overcome this limitation by bounding the probability of ranking a positive instance before most negative instances, and show that TopPush is effective in placing positive instances at the top of a ranked list. Extensive empirical study shows that TopPush is computationally more efficient than most ranking algorithms, and yields performance comparable to the state-of-the-art approaches that maximize the ranking accuracy at the top.

The rest of this paper is organized as follows. Section 2 introduces the preliminaries of bipartite ranking, and addresses the difference between AUC optimization and maximizing accuracy at the top. Section 3 presents the proposed TopPush algorithm and its key theoretical properties. Section 4 summarizes the empirical study, and Section 5 concludes this work with future directions.

2 Bipartite Ranking: AUC vs. Accuracy at the Top

Let X = {x in R^d : ||x|| <= 1} be the instance space. Let S = S^+ union S^- be a set of training instances, where S^+ = {x_i^+ in X}_{i=1}^m and S^- = {x_i^- in X}_{i=1}^n include m positive instances and n negative instances independently sampled from distributions P^+ and P^-, respectively. The goal of bipartite ranking is to learn a ranking function f : X -> R that is likely to place a positive instance before most negative ones. In the literature, bipartite ranking has found applications in many domains [32, 25], and its theoretical properties have been examined by several studies [2, 6, 20, 26]. AUC is a commonly used evaluation metric for bipartite ranking [15, 9].
By exploiting its equivalence to the Wilcoxon-Mann-Whitney statistic [15], many ranking algorithms have been developed to optimize AUC by minimizing the ranking loss defined as

L_rank(f; S) = (1/(mn)) sum_{i=1}^m sum_{j=1}^n I(f(x_i^+) <= f(x_j^-)),   (1)

where I(.) is the indicator function. Other than a few special loss functions (e.g., exponential and logistic loss) [33, 20], most of these methods need to enumerate all the positive-negative instance pairs, making them unattractive for large datasets. Various methods have been developed to address this computational challenge [43, 13]. Recently, there is a growing interest in optimizing ranking accuracy at the top [7, 3]. Maximizing AUC is not suitable for this goal, as indicated by the analysis in [7]. To address this challenge, we propose to maximize the number of positive instances that are ranked before the first negative instance, which is known as positives at the top [33, 1, 3]. We can translate this objective into the minimization of the following loss

L(f; S) = (1/m) sum_{i=1}^m I(f(x_i^+) <= max_{1<=j<=n} f(x_j^-)),   (2)

which computes the fraction of positive instances ranked below the top-ranked negative instance. By minimizing the loss in (2), we essentially push negative instances away from the top of the ranked list, leading to more positive ones placed at the top. We note that (2) is fundamentally different from AUC optimization, as AUC does not focus on the ranking accuracy at the top. More discussion about the relationship between (1) and (2) can be found in the longer version of the paper [22]. To design practical learning algorithms, we replace the indicator function in (2) with its convex surrogate, leading to the following loss function

L_ell(f; S) = (1/m) sum_{i=1}^m ell(max_{1<=j<=n} f(x_j^-) - f(x_i^+)),   (3)

where ell(.) is a convex loss function that is non-decreasing (footnote 1) and differentiable.
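As a quick numerical illustration of the difference between (1) and (2) (our own sketch; raw scores stand in for f-values):

```python
import numpy as np

def rank_loss(s_pos, s_neg):
    """Ranking loss (1): fraction of positive-negative pairs with
    f(x_i^+) <= f(x_j^-), i.e., one minus the (tie-penalizing) AUC."""
    return np.mean(s_pos[:, None] <= s_neg[None, :])

def top_loss(s_pos, s_neg):
    """Loss (2): fraction of positives scored no higher than the
    top-ranked (maximum-scoring) negative."""
    return np.mean(s_pos <= s_neg.max())

s_pos = np.array([3.0, 2.5, 0.9, 0.2])   # scores f(x_i^+)
s_neg = np.array([1.0, 0.5, 0.1])        # scores f(x_j^-)
print(rank_loss(s_pos, s_neg))           # 3 of 12 pairs misordered -> 0.25
print(top_loss(s_pos, s_neg))            # 2 of 4 positives below the top negative -> 0.5
```

A single high-scoring negative leaves (1) small while driving (2) up, which is exactly the behavior at the top of the list that (2) penalizes and AUC ignores.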
Examples of such loss functions include the truncated quadratic loss ell(z) = ([1 + z]_+)^2, the exponential loss ell(z) = e^z, and the logistic loss ell(z) = log(1 + e^z). In the discussion below, we restrict ourselves to the truncated quadratic loss, though most of our analysis applies to the others. It is easy to verify that the loss L_ell(f; S) in (3) is equivalent to the loss used in InfinitePush [1] (a special case of P-norm Push [33]), i.e.,

L_ell^inf(f; S) = max_{1<=j<=n} (1/m) sum_{i=1}^m ell(f(x_j^-) - f(x_i^+)).   (4)

The apparent advantage of employing L_ell(f; S) instead of L_ell^inf(f; S) is that it only needs to evaluate m positive-negative instance pairs, whereas the latter needs to enumerate all mn instance pairs. As a result, the number of dual variables induced by L_ell(f; S) is n + m, linear in the number of training instances, which is significantly smaller than mn, the number of dual variables induced by L_ell^inf(f; S) [1, 31]. It is this difference that makes the proposed algorithm achieve a computational complexity linear in the number of training instances, and therefore be more efficient than most state-of-the-art algorithms for bipartite ranking.

3 TopPush for Optimizing Top Accuracy

We first present a learning algorithm to minimize the loss function in (3), and then the computational complexity and performance guarantee for the proposed algorithm.

3.1 Dual Formulation

We consider a linear ranking function (footnote 2), i.e., f(x) = w^T x, where w in R^d is the weight vector to be learned. As a result, the learning problem is given by the following optimization problem

min_w  (lambda/2) ||w||^2 + (1/m) sum_{i=1}^m ell(max_{1<=j<=n} w^T x_j^- - w^T x_i^+),   (5)

where lambda > 0 is a regularization parameter. Directly minimizing the objective in (5) can be challenging because of the max operator in the loss function. We address this challenge by developing a dual formulation for (5).

Footnote 1: In this paper, we let ell(z) be non-decreasing for the simplicity of formulating the dual problem.
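The claimed equivalence of (3) and (4) holds because a non-decreasing ell lets the max over negatives move inside the average: the worst per-negative average in (4) is attained at the top-scoring negative. A quick numerical check (our own sketch):

```python
import numpy as np

trunc_quad = lambda z: np.maximum(0.0, 1.0 + z) ** 2   # ell(z) = ([1 + z]_+)^2

def loss_3(s_pos, s_neg, ell):
    """(3): average loss of each positive against the top-scoring negative."""
    return np.mean(ell(s_neg.max() - s_pos))

def loss_4(s_pos, s_neg, ell):
    """(4): InfinitePush-style loss, the worst per-negative average."""
    return max(np.mean(ell(s - s_pos)) for s in s_neg)

rng = np.random.default_rng(1)
s_pos, s_neg = rng.normal(1, 1, 50), rng.normal(-1, 1, 80)
assert np.isclose(loss_3(s_pos, s_neg, trunc_quad), loss_4(s_pos, s_neg, trunc_quad))
```

The computational difference is visible in the two bodies: loss_3 touches each positive once against one negative score, while loss_4 sweeps every negative.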
Specifically, given a convex and differentiable function ell(z), we can rewrite it in its convex conjugate form as ell(z) = max_{alpha in Omega} (alpha z - ell*(alpha)), where ell*(alpha) is the convex conjugate of ell(z) and Omega is the domain of the dual variable [4]. For example, the convex conjugate of the truncated quadratic loss is ell*(alpha) = -alpha + alpha^2/4 with Omega = R_+. We note that the dual form has been widely used to improve computational efficiency [35] and to connect different styles of learning algorithms [19]. Here we exploit it to overcome the difficulty caused by the max operator. The dual form of (5) is given in the following theorem, whose detailed proof can be found in the longer version [22].

Theorem 1. Define X^+ = (x_1^+, ..., x_m^+)^T and X^- = (x_1^-, ..., x_n^-)^T. The dual problem of (5) is

min_{(alpha, beta) in Xi}  g(alpha, beta) = (1/(2 lambda m)) ||alpha^T X^+ - beta^T X^-||^2 + sum_{i=1}^m ell*(alpha_i),   (6)

where alpha and beta are dual variables, and the domain Xi is defined as Xi = {alpha in R_+^m, beta in R_+^n : 1_m^T alpha = 1_n^T beta}. Let alpha* and beta* be the optimal solution to the dual problem (6). Then, the optimal solution w* to the primal problem in (5) is given by

w* = (1/(lambda m)) (alpha*^T X^+ - beta*^T X^-).   (7)

Remark: The key feature of the dual problem in (6) is that the number of dual variables is m + n, leading to a linear time ranking algorithm. This is in contrast to the InfinitePush algorithm in [1], which introduces mn dual variables and incurs a higher computational cost. In addition, the objective function in (6) is smooth if the convex conjugate ell*(.) is smooth, which is true for many common loss functions (e.g., truncated quadratic loss and logistic loss). It is well known in the optimization literature [4] that an O(1/T^2) convergence rate can be achieved if the objective function is smooth, where T is the number of iterations; this also helps in designing an efficient learning algorithm.

Footnote 2: Nonlinear functions can be trained by kernel methods, and the Nyström method and random Fourier features can transform the kernelized problem into a linear one. See [41] for more discussion.
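The stated conjugate pair can be sanity-checked numerically against the definition ell*(alpha) = sup_z (alpha z - ell(z)) (our own sketch):

```python
import numpy as np

def trunc_quad(z):
    """Truncated quadratic loss ell(z) = ([1 + z]_+)^2."""
    return np.maximum(0.0, 1.0 + z) ** 2

def conjugate_numeric(alpha, zs):
    """Grid approximation of ell*(alpha) = sup_z (alpha * z - ell(z))."""
    return np.max(alpha * zs - trunc_quad(zs))

zs = np.linspace(-5.0, 5.0, 200001)
for alpha in [0.5, 1.0, 2.0, 4.0]:          # points in Omega = R_+
    closed_form = -alpha + alpha ** 2 / 4.0
    assert abs(conjugate_numeric(alpha, zs) - closed_form) < 1e-4
```

For alpha >= 0 the supremum is attained at z = alpha/2 - 1; for alpha < 0 the objective alpha z - ell(z) is unbounded as z -> -inf, which is why Omega = R_+.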
3.2 Linear Time Bipartite Ranking

According to Theorem 1, to learn a ranking function f(w), it is sufficient to learn the dual variables alpha and beta by solving the problem in (6). For this purpose, we adopt an accelerated gradient method due to its light computation per iteration, and refer to the obtained algorithm as TopPush. Specifically, we choose Nesterov's method [30, 29], which achieves an optimal convergence rate O(1/T^2) for smooth objective functions. One of the key features of Nesterov's method is that it maintains two sequences of solutions: {(alpha_k, beta_k)} and {(s_k^a, s_k^b)}, where the sequence of auxiliary solutions {(s_k^a, s_k^b)} is introduced to exploit the smoothness of the objective to achieve a faster convergence rate. Algorithm 1 shows the key steps (footnote 3) of Nesterov's method for solving the problem in (6), where the gradients of the objective function g(alpha, beta) can be efficiently computed as

grad_alpha g(alpha, beta) = X^+ nu^T / (lambda m) + ell*'(alpha),   grad_beta g(alpha, beta) = -X^- nu^T / (lambda m),   (8)

where nu = alpha^T X^+ - beta^T X^- and ell*'(.) is the derivative of ell*(.).

Algorithm 1: The TopPush Algorithm
Input: X^+ in R^{m x d}, X^- in R^{n x d}, lambda, eps
Output: w
1: initialize alpha_1 = alpha_0 = 0_m, beta_1 = beta_0 = 0_n, and let t_{-1} = 0, t_0 = 1, L_0 = 1/(m + n)
2: repeat for k = 1, 2, ...
3:   compute s_k^a = alpha_k + omega_k (alpha_k - alpha_{k-1}) and s_k^b = beta_k + omega_k (beta_k - beta_{k-1}), where omega_k = (t_{k-2} - 1)/t_{k-1}
4:   compute g_a = grad_alpha g(s_k^a, s_k^b) and g_b = grad_beta g(s_k^a, s_k^b) based on (8)
5:   find L_k > L_{k-1} such that g(alpha_{k+1}, beta_{k+1}) > g(s_k^a, s_k^b) + (||g_a||^2 + ||g_b||^2)/(2 L_k), where [alpha_{k+1}; beta_{k+1}] = pi_Xi([alpha'_{k+1}; beta'_{k+1}]) with alpha'_{k+1} = s_k^a - g_a/L_k and beta'_{k+1} = s_k^b - g_b/L_k
6:   update t_k = (1 + sqrt(1 + 4 t_{k-1}^2))/2
7: until convergence (i.e., |g(alpha_{k+1}, beta_{k+1}) - g(alpha_k, beta_k)| < eps)
8: return w = (1/(lambda m)) (alpha_k^T X^+ - beta_k^T X^-)

It should be noted that (6) is a constrained problem, and therefore, at each step of gradient mapping, we have to project the dual solution into the domain Xi (i.e., [alpha_{k+1}; beta_{k+1}] = pi_Xi([alpha'_{k+1}; beta'_{k+1}]) in step 5) to keep it feasible. Below, we discuss how to solve this projection step efficiently.
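A compact, unofficial sketch of Algorithm 1's skeleton: the t_k momentum schedule with the gradient mapping and Xi-projection, but with a fixed step 1/L in place of the line search, and plain bisection for the projection root (rather than the exact O(m + n) divide-and-conquer scheme discussed below). A toy quadratic objective stands in for g; all names are our own.

```python
import numpy as np

def project_pair(a0, b0, tol=1e-10):
    """Project (a0, b0) onto {a >= 0, b >= 0, sum(a) = sum(b)}:
    a* = [a0 - g]_+, b* = [b0 + g]_+ at the root g of
    rho(g) = sum [a0 - g]_+ - sum [b0 + g]_+ (found here by bisection)."""
    rho = lambda g: np.maximum(a0 - g, 0).sum() - np.maximum(b0 + g, 0).sum()
    hi = np.abs(a0).max() + np.abs(b0).max() + 1.0
    lo = -hi
    while hi - lo > tol:                 # rho is non-increasing in g
        mid = 0.5 * (lo + hi)
        if rho(mid) > 0:
            lo = mid
        else:
            hi = mid
    g = 0.5 * (lo + hi)
    return np.maximum(a0 - g, 0.0), np.maximum(b0 + g, 0.0)

def nesterov(grad_a, grad_b, m, n, L, iters=100):
    """Accelerated projected gradient with the t_k momentum schedule
    of Algorithm 1, but a fixed step 1/L instead of the line search."""
    a_prev = a = np.zeros(m)
    b_prev = b = np.zeros(n)
    t_prev, t = 0.0, 1.0
    for _ in range(iters):
        w = (t_prev - 1.0) / t
        sa = a + w * (a - a_prev)        # auxiliary solutions (step 3)
        sb = b + w * (b - b_prev)
        a_new, b_new = project_pair(sa - grad_a(sa, sb) / L,   # steps 4-5
                                    sb - grad_b(sa, sb) / L)
        a_prev, b_prev, a, b = a, b, a_new, b_new
        t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0   # step 6
    return a, b

# Toy smooth objective 0.5*||a - ca||^2 + 0.5*||b - cb||^2 over the domain Xi;
# its minimizer is exactly the projection of (ca, cb) onto Xi.
ca, cb = np.array([0.9, 0.4]), np.array([0.1, -0.2])
a, b = nesterov(lambda a, b: a - ca, lambda a, b: b - cb, 2, 2, L=1.0)
```

With the line search and the exact projection swapped back in, this loop matches Algorithm 1 step for step.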
Projection Step. For clear notation, we expand the projection step into the problem

min_{alpha >= 0, beta >= 0}  (1/2)||alpha - alpha^0||^2 + (1/2)||beta - beta^0||^2   s.t.  1_m^T alpha = 1_n^T beta,   (9)

where alpha^0 and beta^0 are the solutions obtained in the last iteration. We note that similar projection problems have been studied in [34, 24], where they either have O((m + n) log(m + n)) time complexity [34] or only provide approximate solutions [24]. Instead, we provide a method that finds the exact solution to (9) in O(m + n) time, based on the following proposition, which can be proved using a technique similar to that for Theorem 2 in [24]:

Proposition 1. The optimal solution to the projection problem in (9) is given by

alpha* = [alpha^0 - gamma*]_+  and  beta* = [beta^0 + gamma*]_+ ,

where gamma* is the root of the function rho(gamma) = sum_{i=1}^m [alpha_i^0 - gamma]_+ - sum_{j=1}^n [beta_j^0 + gamma]_+ .

According to Proposition 1, the key to solving this problem is to find the root of rho(gamma). Instead of approximating the solution via bisection as in [24], we develop a divide-and-conquer method that finds the exact solution gamma* in O(m + n) time, where a similar approach has been used in [10]. The basic idea is to first identify the smallest interval that contains the root, based on a modification of the randomized median finding algorithm [8], and then solve for the root exactly within that interval. The detailed projection procedure can be found in the longer version [22].

Footnote 3: The step size of Nesterov's method depends on the smoothness of the objective function. In the current work we adopt Nemirovski's line search scheme [29] to compute the smoothness parameter; the detailed algorithm can be found in [22].

Table 1: Comparison of computational complexities for ranking algorithms, where d is the number of dimensions, eps is the precision parameter, and m and n are the numbers of positive and negative instances, respectively.
Algorithm               Computational Complexity
SVMRank [18]            O(((m + n)d + (m + n) log(m + n)) / eps)
SVMMAP [42]             O(((m + n)d + (m + n) log(m + n)) / eps)
OWPC [38]               O(((m + n)d + (m + n) log(m + n)) / eps)
SVMpAUC [27, 28]        O((n log n + m log m + (m + n)d) / eps)
InfinitePush [1]        O((mnd + mn log(mn)) / eps^2)
L1SVIP [31]             O((mnd + mn log(mn)) / eps)
TopPush (this paper)    O((m + n)d / sqrt(eps))

3.3 Convergence and Computational Complexity

The theorem below states the convergence of the TopPush algorithm, which follows immediately from the convergence result for Nesterov's method [29].

Theorem 2. Let alpha_T and beta_T be the solution output by TopPush after T iterations. We have

g(alpha_T, beta_T) <= min_{(alpha, beta) in Xi} g(alpha, beta) + eps

provided T >= O(1/sqrt(eps)).

Finally, since the computational cost of each iteration is dominated by the gradient evaluation and the projection step, the time complexity of each iteration is O((m + n)d): the complexity of the projection step is O(m + n) and the cost of computing the gradient is O((m + n)d). Combining this result with Theorem 2, we have that, to find an eps-suboptimal solution, the total computational complexity of the TopPush algorithm is O((m + n)d/sqrt(eps)), which is linear in the number of training instances. Table 1 compares the computational complexity of TopPush with that of state-of-the-art algorithms. It is easy to see that TopPush is asymptotically more efficient than the state-of-the-art ranking algorithms (footnote 4). For instance, it is much more efficient than InfinitePush and its sparse extension L1SVIP, whose complexity depends on the number of positive-negative instance pairs; compared with SVMRank, SVMMAP and SVMpAUC, which handle specific performance metrics via structural SVM, the linear dependence on the number of training instances makes our TopPush approach more appealing, especially for large datasets.

3.4 Theoretical Guarantee

We develop a theoretical guarantee for the ranking performance of TopPush. In [33, 1], the authors have developed margin-based generalization bounds for the loss function L_ell^inf.
One limitation of the analysis in [33, 1] is that it tries to bound the probability for a positive instance to be ranked before any negative instance, leading to relatively pessimistic bounds (footnote 5). Our analysis avoids this pitfall by considering the probability of ranking a positive instance before most negative instances. To this end, we first define h_b(x, w), the probability for any negative instance to be ranked above x using the ranking function f(x) = w^T x, as

h_b(x, w) = E_{x^- ~ P^-} [I(w^T x <= w^T x^-)].

Since we are interested in whether positive instances are ranked above most negative instances, we measure the quality of f(x) = w^T x by the probability for any positive instance to be ranked below delta percent of the negative instances, i.e.,

P_b(w, delta) = Pr_{x^+ ~ P^+} (h_b(x^+, w) >= delta).

Clearly, if a ranking function achieves a high ranking accuracy at the top, it should have a large percentage of positive instances with ranking scores higher than most of the negative instances, leading to a small value of P_b(w, delta) with small delta. The following theorem bounds P_b(w, delta) for TopPush; the detailed proof can be found in the longer version [22].

Footnote 4: In Table 1, we report the complexity of SVMpAUC-tight in [28], which is more efficient than SVMpAUC in [27]. In addition, SVMpAUC-tight is used in the experiments, and we do not distinguish between them in this paper.
Footnote 5: For instance, for the bounds in [33], the failure probability can be as large as 1 if the parameter p is large.

Theorem 3. Given training data S consisting of m independent samples from P^+ and n independent samples from P^-, let w* be the optimal solution to the problem in (5). Assume m >= 12 and n >> t. Then, with probability at least 1 - 2e^{-t},

P_b(w*, delta) <= L_ell(w*, S) + O(sqrt((t + log m)/m)),

where delta = O(sqrt(log m / n)) and L_ell(w*, S) = (1/m) sum_{i=1}^m ell(max_{1<=j<=n} w*^T x_j^- - w*^T x_i^+).
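To make h_b and P_b concrete, here is a small Monte Carlo estimate on synthetic, well-separated classes (our own sketch; finite-sample averages replace the expectations in the definitions):

```python
import numpy as np

def h_b(x, w, X_neg):
    """Empirical h_b(x, w): fraction of negatives ranked above x by f(x) = w^T x."""
    return np.mean(X_neg @ w >= x @ w)

def P_b(w, delta, X_pos, X_neg):
    """Empirical P_b(w, delta): fraction of positives with h_b(x^+, w) >= delta."""
    return np.mean([h_b(x, w, X_neg) >= delta for x in X_pos])

rng = np.random.default_rng(0)
w = np.array([1.0, 0.0])
X_pos = rng.normal(loc=[+1, 0], scale=0.5, size=(200, 2))
X_neg = rng.normal(loc=[-1, 0], scale=0.5, size=(200, 2))
frac = P_b(w, 0.25, X_pos, X_neg)
# On well-separated data, few positives have 25% of negatives above them,
# so frac is small, matching the intuition behind Theorem 3.
```

Increasing the number of negatives sharpens the empirical h_b values, which is the role of n in shrinking delta in the bound.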
Remark: Theorem 3 implies that if the empirical loss L_ell(w*, S) <= O(log m / m), then for most positive instances x^+ (i.e., a fraction 1 - O(log m / m)), the percentage of negative instances ranked above x^+ is upper bounded by O(sqrt(log m / n)). We observe that m and n play different roles in the bound: because the empirical loss compares the positive instances to the negative instance with the largest score, it usually grows significantly more slowly with increasing n. For instance, the largest absolute value of Gaussian random samples grows with log n. Thus, we believe that the main effect of increasing n in our bound is to reduce delta (which decreases at the rate of 1/sqrt(n)), especially when n is large. Meanwhile, by increasing the number of positive instances m, we reduce the bound on P_b(w, delta), and consequently increase the chance of finding positive instances at the top.

4 Experiments

4.1 Settings

To evaluate the performance of the TopPush algorithm, we conduct a set of experiments on real-world datasets. Table 2 (left column) summarizes the datasets used in our experiments. Some of them were used in previous studies [1, 31, 3], and others are larger datasets from different domains. We compare TopPush with state-of-the-art algorithms that focus on accuracy at the top, including SVMMAP [42], SVMpAUC [28] with alpha = 0 and beta = 1/n, AATP [3] and InfinitePush [1]. In addition, for completeness, several state-of-the-art classification and ranking models are included in the comparison: logistic regression (LR) for binary classification, cost-sensitive SVM (cs-SVM), which addresses imbalanced class distributions by introducing a different misclassification cost for each class, and SVMRank [18] for AUC optimization. We implement TopPush and InfinitePush using MATLAB, implement AATP using CVX [14], use LIBLINEAR [11] for LR and cs-SVM, and use the codes shared by the authors of the original works.
We measure the accuracy at the top by commonly used metrics (footnote 6): (i) positives at the top (Pos@Top) [1, 31, 3], which is defined as the fraction of positive instances ranked above the top-ranked negative, (ii) average precision (AP), and (iii) normalized DCG scores (NDCG). On each dataset, experiments are run for thirty trials. In each trial, the dataset is randomly divided into two subsets: 2/3 for training and 1/3 for testing. For all algorithms, we set the precision parameter eps to 10^-4, choose the other parameters by 5-fold cross validation (based on the average value of Pos@Top) on the training set, and perform the evaluation on the test set. Finally, averaged results over thirty trials are reported. All experiments are run on a machine with two Intel Xeon E7 CPUs and 16GB memory.

4.2 Results

In Table 2, we report the performance of the algorithms in comparison, where the statistics of the testbeds are included in the first column of the table. For better comparison between the performance of TopPush and the baselines, pairwise t-tests at a significance level of 0.9 are performed, and results are marked "• / ◦" in Table 2 when TopPush is statistically significantly better/worse. When an evaluation task cannot be completed in two weeks, it is stopped automatically, and no result is reported. As a consequence, results for some algorithms are missing in Table 2 for certain datasets, especially for large ones. We can see from Table 2 that TopPush, LR and cs-SVM succeed in finishing the evaluation on all datasets (even the largest dataset, url). In contrast, SVMRank, SVMMAP and SVMpAUC fail to complete the training in time for several large datasets. InfinitePush and AATP have the worst scalability: they are only able to finish on the smallest dataset, diabetes. We thus conclude that, overall, TopPush scales well to large datasets.

Footnote 6: It is worth mentioning that we also measure the ranking performance by AUC, and the results can be found in [22].
In addition, more details of the experimental setting can be found there.

Table 2: Data statistics (left column) and experimental results. For each dataset, the number of positive and negative instances is given below the dataset name as m/n, together with the dimensionality d. For the training time comparison, "▲" ("⋆") is marked if TopPush is at least 10 (100) times faster than the compared algorithm. For the performance (mean±std) comparison, "•" ("◦") is marked if TopPush performs significantly better (worse) than the baseline based on a pairwise t-test at the 0.9 significance level. On each dataset, if the evaluation of an algorithm could not be completed in two weeks, it was stopped and its results are missing from the table.

                Algorithm     Time (s)        Pos@Top       AP            NDCG
diabetes        TopPush       5.11 × 10^−3    .123 ± .056   .872 ± .023   .976 ± .005
500/268         LR            2.30 × 10^−2    .064 ± .075•  .881 ± .022   .973 ± .008
d: 34           cs-SVM        7.70 × 10^−2    .077 ± .088•  .758 ± .166•  .920 ± .078•
                SVMRank       6.11 × 10^−2    .087 ± .082•  .879 ± .022   .975 ± .006
                SVMMAP        4.71 × 10^0     .077 ± .072•  .879 ± .012   .969 ± .009
                SVMpAUC       2.09 × 10^−1 ▲  .053 ± .096•  .668 ± .123•  .884 ± .065•
                InfinitePush  2.63 × 10^1 ⋆   .119 ± .051   .877 ± .035   .978 ± .007
                AATP          2.72 × 10^3 ⋆   .127 ± .061   .881 ± .035   .979 ± .010
news20-forsale  TopPush       2.16 × 10^0     .191 ± .088   .843 ± .018   .970 ± .005
999/18,929      LR            4.14 × 10^0     .086 ± .067•  .803 ± .020•  .962 ± .005
d: 62,061       cs-SVM        1.89 × 10^0     .114 ± .069•  .766 ± .021•  .955 ± .006•
                SVMRank       2.96 × 10^2 ⋆   .149 ± .056•  .850 ± .016   .972 ± .003
                SVMMAP        8.42 × 10^2 ⋆   .184 ± .092   .832 ± .022   .969 ± .007
                SVMpAUC       3.25 × 10^2 ⋆   .196 ± .087   .812 ± .019•  .963 ± .005•
nslkdd          TopPush       7.64 × 10^1     .633 ± .088   .978 ± .001   .997 ± .001
71,463/77,054   LR            3.63 × 10^1     .220 ± .053•  .981 ± .002   .998 ± .001
d: 121          cs-SVM        1.86 × 10^0     .556 ± .037•  .980 ± .001   .998 ± .001
                SVMpAUC       1.72 × 10^2     .634 ± .059   .956 ± .002•  .996 ± .001
real-sim        TopPush       1.34 × 10^1     .186 ± .049   .986 ± .001   .998 ± .001
22,238/50,071   LR            7.67 × 10^0     .100 ± .043•  .989 ± .001   .999 ± .001
d: 20,958       cs-SVM        4.84 × 10^0     .146 ± .031•  .979 ± .001   .998 ± .001
                SVMRank       1.83 × 10^3 ⋆   .090 ± .045•  .986 ± .000   .999 ± .001
spambase        TopPush       1.51 × 10^−1    .129 ± .077   .922 ± .006   .988 ± .001
1,813/2,788     LR            3.11 × 10^−2    .071 ± .053•  .920 ± .010   .987 ± .003
d: 57           cs-SVM        8.31 × 10^−2    .069 ± .059•  .907 ± .010•  .980 ± .004•
                SVMRank       2.31 × 10^1 ▲   .069 ± .076•  .931 ± .010   .990 ± .003
                SVMMAP        1.92 × 10^2 ⋆   .097 ± .069•  .935 ± .014   .984 ± .005
                SVMpAUC       1.73 × 10^0 ▲   .073 ± .058•  .854 ± .024•  .975 ± .007•
                InfinitePush  1.78 × 10^3 ⋆   .132 ± .087   .920 ± .005   .987 ± .002
url             TopPush       5.11 × 10^3     .474 ± .046   .986 ± .001   .999 ± .001
792,145/        LR            8.98 × 10^3     .362 ± .113•  .993 ± .001◦  .999 ± .001
1,603,985       cs-SVM        3.78 × 10^3     .432 ± .069•  .991 ± .002   .998 ± .001
d: 3,231,961
w8a             TopPush       7.35 × 10^0     .226 ± .053   .710 ± .019   .938 ± .005
1,933/62,767    LR            2.46 × 10^0     .107 ± .093•  .450 ± .374•  .775 ± .221•
d: 300          cs-SVM        3.87 × 10^0     .118 ± .105•  .447 ± .372•  .774 ± .220•
                SVMpAUC       2.59 × 10^3 ⋆   .207 ± .046   .673 ± .021•  .929 ± .006•

Performance Comparison In terms of the evaluation metric Pos@Top, we find that TopPush yields performance similar to InfinitePush and AATP, and performs significantly better than the other baselines, including LR, cs-SVM, SVMRank, SVMMAP and SVMpAUC. This is consistent with the design of TopPush, which aims to maximize the accuracy at the top of the ranked list. Since the loss functions optimized by InfinitePush and AATP are similar to that of TopPush, it is not surprising that they yield similar performance. The key advantage of the proposed algorithm over InfinitePush and AATP is that it is computationally more efficient and scales well to large datasets. In terms of AP and NDCG, we observe that TopPush yields performance similar to, if not better than, that of state-of-the-art methods such as SVMMAP and SVMpAUC, which are designed to optimize these metrics. Overall, we conclude that the proposed algorithm is effective in optimizing the ranking accuracy for the top-ranked instances.
Training Efficiency To evaluate the computational efficiency, we set the parameters of the different algorithms to the values selected by cross-validation, and run these algorithms on the full datasets that include both the training and testing sets. Table 2 summarizes the training time of the different algorithms. From the results, we can see that TopPush is faster than the state-of-the-art ranking methods on most datasets. In fact, the training time of TopPush is similar to that of LR and cs-SVM implemented by LIBLINEAR. Since the time complexity of learning a binary classification model is usually linear in the number of training instances, this result implicitly suggests a linear time complexity for the proposed algorithm.

[Figure 1: Training time of TopPush versus training data size (log-log plot, url dataset), with one curve per value of λ ∈ {100, 10, 1, 0.1, 0.01} and a linear reference line.]

Scalability We study how TopPush scales with the number of training examples by using the largest dataset, url. Figure 1 shows the log-log plot of the training time of TopPush vs. the size of the training data, where different lines correspond to different values of λ. For the purpose of comparison, we also include a black dash-dot line that fits the training time by a linear function of the number of training instances (i.e., Θ(m + n)). From the plot, we can see that for the different regularization parameters λ, the training time of TopPush increases even more slowly than the number of training instances. This is consistent with our theoretical analysis given in Section 3.3.

5 Conclusion

In this paper, we focus on bipartite ranking algorithms that optimize accuracy at the top of the ranked list. To this end, we consider maximizing the number of positive instances that are ranked above any negative instance, and develop an efficient algorithm, named TopPush, to solve the related optimization problem.
Compared with existing work on this topic, the proposed TopPush algorithm scales linearly in the number of training instances, in contrast to most existing algorithms for bipartite ranking, whose time complexities depend on the number of positive-negative instance pairs. Moreover, our theoretical analysis clearly shows that it leads to a ranking function that places many positive instances at the top of the ranked list. Empirical studies verify the theoretical claims: the TopPush algorithm is effective in maximizing the accuracy at the top and is significantly more efficient than the state-of-the-art algorithms for bipartite ranking. In the future, we plan to develop appropriate univariate losses, instead of pairwise ranking losses, for efficient bipartite ranking that maximizes accuracy at the top.

Acknowledgement This research was supported by the 973 Program (2014CB340501), NSFC (61333014), NSF (IIS-1251031), and ONR Award (N000141210431).

References

[1] S. Agarwal. The infinite push: A new support vector ranking algorithm that directly optimizes accuracy at the absolute top of the list. In SDM, pages 839–850, 2011.
[2] S. Agarwal, T. Graepel, R. Herbrich, S. Har-Peled, and D. Roth. Generalization bounds for the area under the ROC curve. JMLR, 6:393–425, 2005.
[3] S. Boyd, C. Cortes, M. Mohri, and A. Radovanovic. Accuracy at the top. In NIPS, pages 962–970, 2012.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[5] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In ICML, pages 89–96, 2005.
[6] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical minimization of U-statistics. Annals of Statistics, 36(2):844–874, 2008.
[7] S. Clémençon and N. Vayatis. Ranking the best instances. JMLR, 8:2671–2699, 2007.
[8] T. Cormen, C. Leiserson, R. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, 2001.
[9] C. Cortes and M.
Mohri. AUC optimization vs. error rate minimization. In NIPS, pages 313–320, 2004.
[10] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In ICML, pages 272–279, 2008.
[11] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. JMLR, 9:1871–1874, 2008.
[12] Y. Freund, R. Iyer, R. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. JMLR, 4:933–969, 2003.
[13] W. Gao, R. Jin, S. Zhu, and Z.-H. Zhou. One-pass AUC optimization. In ICML, pages 906–914, 2013.
[14] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx, March 2014.
[15] J. Hanley and B. McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143:29–36, 1982.
[16] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. In Advances in Large Margin Classifiers, pages 115–132. MIT Press, Cambridge, MA, 2000.
[17] T. Joachims. A support vector method for multivariate performance measures. In ICML, pages 377–384, Bonn, Germany, 2005.
[18] T. Joachims. Training linear SVMs in linear time. In KDD, pages 217–226, 2006.
[19] T. Kanamori, A. Takeda, and T. Suzuki. Conjugate relation between loss functions and uncertainty sets in classification problems. JMLR, 14:1461–1504, 2013.
[20] W. Kotlowski, K. Dembczynski, and E. Hüllermeier. Bipartite ranking through minimization of univariate loss. In ICML, pages 1113–1120, 2011.
[21] Q. V. Le and A. Smola. Direct optimization of ranking measures. CoRR, abs/0704.3359, 2007.
[22] N. Li, R. Jin, and Z.-H. Zhou. Top rank optimization in linear time. CoRR, abs/1410.1462, 2014.
[23] N. Li, I. W. Tsang, and Z.-H. Zhou. Efficient optimization of performance measures by classifier adaptation. IEEE-PAMI, 35(6):1370–1382, 2013.
[24] J. Liu and J. Ye.
Efficient Euclidean projections in linear time. In ICML, pages 657–664, 2009.
[25] T.-Y. Liu. Learning to Rank for Information Retrieval. Springer, 2011.
[26] H. Narasimhan and S. Agarwal. On the relationship between binary classification, bipartite ranking, and binary class probability estimation. In NIPS, pages 2913–2921, 2013.
[27] H. Narasimhan and S. Agarwal. A structural SVM based approach for optimizing partial AUC. In ICML, pages 516–524, 2013.
[28] H. Narasimhan and S. Agarwal. SVMpAUC-tight: A new support vector method for optimizing partial AUC based on a tight convex upper bound. In KDD, pages 167–175, 2013.
[29] A. Nemirovski. Efficient methods in convex programming. Lecture Notes, 1994.
[30] Y. Nesterov. Introductory Lectures on Convex Optimization. Kluwer Academic Publishers, 2003.
[31] A. Rakotomamonjy. Sparse support vector infinite push. In ICML, 2012.
[32] S. Rendle, L. Balby Marinho, A. Nanopoulos, and L. Schmidt-Thieme. Learning optimal ranking with tensor factorization for tag recommendation. In KDD, pages 727–736, 2009.
[33] C. Rudin and R. Schapire. Margin-based ranking and an equivalence between AdaBoost and RankBoost. JMLR, 10:2193–2232, 2009.
[34] S. Shalev-Shwartz and Y. Singer. Efficient learning of label ranking by soft projections onto polyhedra. JMLR, 7:1567–1599, 2006.
[35] S. Sun and J. Shawe-Taylor. Sparse semi-supervised learning using conjugate functions. JMLR, 11:2423–2455, 2010.
[36] A. Tewari and P. Bartlett. On the consistency of multiclass classification methods. JMLR, 8:1007–1025, 2007.
[37] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 6:1453–1484, 2005.
[38] N. Usunier, D. Buffoni, and P. Gallinari. Ranking with ordered weighted pairwise classification. In ICML, pages 1057–1064, Montreal, Canada, 2009.
[39] H. Valizadegan, R. Jin, R. Zhang, and J. Mao. Learning to rank by optimizing NDCG measure. In NIPS, pages 1883–1891, 2009.
[40] M. Xu, Y.-F. Li, and Z.-H. Zhou. Multi-label learning with PRO loss. In AAAI, pages 998–1004, 2013.
[41] T. Yang, Y.-F. Li, M. Mahdavi, R. Jin, and Z.-H. Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In NIPS, pages 485–493. MIT Press, 2012.
[42] Y. Yue, T. Finley, F. Radlinski, and T. Joachims. A support vector method for optimizing average precision. In SIGIR, pages 271–278, 2007.
[43] P. Zhao, S. C. H. Hoi, R. Jin, and T. Yang. Online AUC maximization. In ICML, pages 233–240, Bellevue, WA, 2011.
Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics Sergey Levine and Pieter Abbeel Department of Electrical Engineering and Computer Science University of California, Berkeley Berkeley, CA 94709 {svlevine, pabbeel}@eecs.berkeley.edu Abstract We present a policy search method that uses iteratively refitted local linear models to optimize trajectory distributions for large, continuous problems. These trajectory distributions can be used within the framework of guided policy search to learn policies with an arbitrary parameterization. Our method fits time-varying linear dynamics models to speed up learning, but does not rely on learning a global model, which can be difficult when the dynamics are complex and discontinuous. We show that this hybrid approach requires many fewer samples than model-free methods, and can handle complex, nonsmooth dynamics that can pose a challenge for model-based techniques. We present experiments showing that our method can be used to learn complex neural network policies that successfully execute simulated robotic manipulation tasks in partially observed environments with numerous contact discontinuities and underactuation. 1 Introduction Policy search methods can be divided into model-based algorithms, which use a model of the system dynamics, and model-free techniques, which rely only on real-world experience without learning a model [10]. Although model-free methods avoid the need to model system dynamics, they typically require policies with carefully designed, low-dimensional parameterizations [4]. On the other hand, model-based methods require the ability to learn an accurate model of the dynamics, which can be very difficult for complex systems, especially when the algorithm imposes restrictions on the dynamics representation to make the policy search efficient and numerically stable [5]. 
In this paper, we present a hybrid method that fits local, time-varying linear dynamics models, which are not accurate enough for standard model-based policy search. However, we can use these local linear models to efficiently optimize a time-varying linear-Gaussian controller, which induces an approximately Gaussian distribution over trajectories. The key to this procedure is to restrict the change in the trajectory distribution at each iteration, so that the time-varying linear model remains valid under the new distribution. Since the trajectory distribution is approximately Gaussian, this can be done efficiently, in terms of both sample count and computation time. To then learn general parameterized policies, we combine this trajectory optimization method with guided policy search. Guided policy search optimizes policies by using trajectory optimization in an iterative fashion, with the policy optimized to match the trajectory, and the trajectory optimized to minimize cost and match the policy. Previous guided policy search methods used model-based trajectory optimization algorithms that required known, differentiable system dynamics [12, 13, 14]. Using our algorithm, guided policy search can be performed under unknown dynamics. This hybrid guided policy search method has several appealing properties. First, the parameterized policy never needs to be executed on the real system – all system interaction during training is done using the time-varying linear-Gaussian controllers. Stabilizing linear-Gaussian controllers is easier than stabilizing arbitrary policies, and this property can be a notable safety benefit when the initial parameterized policy is unstable. Second, although our algorithm relies on fitting a time-varying linear dynamics model, we show that it can handle contact-rich tasks where the true dynamics are not only nonlinear, but even discontinuous.
This is because the learned linear models average the dynamics from both sides of a discontinuity in proportion to how often each side is visited, unlike standard linearization methods that differentiate the dynamics. This effectively transforms a discontinuous deterministic problem into a smooth stochastic one. Third, our algorithm can learn policies for partially observed tasks by training a parameterized policy that is only allowed to observe some parts of the state space, using a fully observed formulation for the trajectory optimizer. This corresponds to full state observation during training (for example in an instrumented environment), but only partial observation at test time, making policy search for partially observed tasks significantly easier. In our evaluation, we demonstrate this capability by training a policy for inserting a peg into a hole when the precise position of the hole is unknown at test time. The learned policy, represented by a neural network, acquires a strategy that searches for and finds the hole regardless of its position. The main contribution of our work is an algorithm for optimizing trajectories under unknown dynamics. We show that this algorithm outperforms prior methods in terms of both sample complexity and the quality of the learned trajectories. We also show that our method can be integrated with guided policy search, which previously required known models, to learn policies with an arbitrary parameterization, and again demonstrate that the resulting policy search method outperforms prior methods that optimize the parameterized policy directly. Our experimental evaluation includes simulated peg-in-hole insertion, high-dimensional octopus arm control, swimming, and bipedal walking.

2 Preliminaries

Policy search consists of optimizing the parameters θ of a policy π_θ(u_t|x_t), which is a distribution over actions u_t conditioned on states x_t, with respect to the expectation of a cost ℓ(x_t, u_t), denoted E_{π_θ}[Σ_{t=1}^T ℓ(x_t, u_t)].
The expectation is under the policy and the dynamics p(xt+1|xt, ut), which together form a distribution over trajectories τ. We will use Eπθ[ℓ(τ)] to denote the expected cost. Our algorithm optimizes a time-varying linear-Gaussian policy p(ut|xt) = N(Ktxt + kt, Ct), which allows for a particularly efficient optimization method when the initial state distribution is narrow and approximately Gaussian. Arbitrary parameterized policies πθ are optimized using the guided policy search technique, in which πθ is trained to match one or more Gaussian policies p. In this way, we can learn a policy that succeeds from many initial states by training a single stationary, nonlinear policy πθ, which might be represented (for example) by a neural network, from multiple Gaussian policies. As we show in Section 5, this approach can outperform methods that search for the policy parameters θ directly, by taking advantage of the linear-Gaussian structure of p to accelerate learning. For clarity, we will refer to p as a trajectory distribution since, for a narrow Ct and well-behaved dynamics, it induces an approximately Gaussian distribution over trajectories, while the term “policy” will be reserved for the parameterized policy πθ. Time-varying linear-Gaussian policies have previously been used in a number of model-based and model-free methods [25, 16, 14] due to their close connection with linear feedback controllers, which are frequently used in classic deterministic trajectory optimization. The algorithm we will describe builds on the iterative linear-Gaussian regulator (iLQG), which optimizes trajectories by iteratively constructing locally optimal linear feedback controllers under a local linearization of the dynamics and a quadratic expansion of the cost [15]. Under linear dynamics and quadratic costs, the value or cost-to-go function is quadratic, and can be computed with dynamic programming. 
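To make the objects above concrete, the following sketch rolls out a time-varying linear-Gaussian controller u_t ~ N(K_t x_t + k_t, C_t) on a toy known linear system and estimates the expected cost by Monte Carlo averaging. The dynamics, gains, and cost are illustrative assumptions of ours, not values from the paper:

```python
# Sketch: rollout of a time-varying linear-Gaussian controller and Monte Carlo
# estimation of the expected cost E[l(tau)]. All numbers are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, dx, du = 5, 2, 1
A, B = np.eye(dx), np.ones((dx, du))              # toy dynamics x_{t+1} = A x + B u
K = [np.full((du, dx), -0.3) for _ in range(T)]   # feedback gains K_t (assumed)
k = [np.zeros(du) for _ in range(T)]              # feedforward terms k_t
C = [0.01 * np.eye(du) for _ in range(T)]         # action noise covariances C_t

def rollout():
    x, cost = np.ones(dx), 0.0
    for t in range(T):
        u = K[t] @ x + k[t] + rng.multivariate_normal(np.zeros(du), C[t])
        cost += x @ x + 0.1 * u @ u               # quadratic step cost l(x_t, u_t)
        x = A @ x + B @ u                         # deterministic toy transition
    return cost

est = np.mean([rollout() for _ in range(100)])    # Monte Carlo estimate of E[l(tau)]
```

In the paper's setting the true dynamics are unknown, so such rollouts are executed on the real system and serve double duty as training data for the fitted linear models.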
The iLQG algorithm alternates between computing the quadratic value function around the current trajectory, and updating the trajectory using a rollout of the corresponding linear feedback controller. We will use subscripts to denote derivatives, so that ℓ_{xut} is the derivative of the cost at time step t with respect to (x_t, u_t)^T, ℓ_{xu,xut} is the Hessian, ℓ_{xt} is the derivative with respect to x_t, and so forth. Using N(f_{xt} x_t + f_{ut} u_t, F_t) to denote the local linear-Gaussian approximation to the dynamics, iLQG computes the first and second derivatives of the Q and value functions as follows:

Q_{xu,xut} = ℓ_{xu,xut} + f_{xut}^T V_{x,xt+1} f_{xut}          Q_{xut} = ℓ_{xut} + f_{xut}^T V_{xt+1}
V_{x,xt} = Q_{x,xt} − Q_{u,xt}^T Q_{u,ut}^{−1} Q_{u,xt}          V_{xt} = Q_{xt} − Q_{u,xt}^T Q_{u,ut}^{−1} Q_{ut}          (1)

The linear controller g(x_t) = û_t + k_t + K_t(x_t − x̂_t) can be shown to minimize this quadratic Q-function, where x̂_t and û_t are the states and actions of the current trajectory, K_t = −Q_{u,ut}^{−1} Q_{u,xt}, and k_t = −Q_{u,ut}^{−1} Q_{ut}. We can also construct a linear-Gaussian controller with the mean given by the deterministic optimal solution, and the covariance proportional to the curvature of the Q-function:

p(u_t|x_t) = N(û_t + k_t + K_t(x_t − x̂_t), Q_{u,ut}^{−1})

Prior work has shown that this distribution optimizes a maximum entropy objective [12], given by

p(τ) = arg min_{p(τ)∈N(τ)} E_p[ℓ(τ)] − H(p(τ))   s.t.   p(x_{t+1}|x_t, u_t) = N(x_{t+1}; f_{xt} x_t + f_{ut} u_t, F_t),          (2)

where H is the differential entropy. This means that the linear-Gaussian controller produces the widest, highest-entropy distribution that also minimizes the expected cost, subject to the linearized dynamics and quadratic cost function. Although this objective differs from the expected cost, it is useful as an intermediate step in algorithms that optimize the more standard expected cost objective [20, 12]. Our method similarly uses the maximum entropy objective as an intermediate step, and converges to a trajectory distribution with the optimal expected cost.
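A minimal sketch of the backward recursion in Equation (1), assuming known time-varying linear dynamics and a quadratic cost with no state-action cross terms (all names are ours, not the authors' code):

```python
# iLQG-style backward pass. Inputs are lists over time steps: dynamics
# Jacobians fx[t], fu[t]; cost Hessians lxx[t], luu[t]; cost gradients
# lx[t], lu[t]. Returns feedback gains K_t and feedforward terms k_t.
import numpy as np

def backward_pass(fx, fu, lxx, luu, lx, lu):
    T, dx = len(fx), fx[0].shape[0]
    Vxx, Vx = np.zeros((dx, dx)), np.zeros(dx)   # value function beyond the horizon
    K, k = [], []
    for t in reversed(range(T)):
        Qxx = lxx[t] + fx[t].T @ Vxx @ fx[t]     # Q_{x,xt}
        Quu = luu[t] + fu[t].T @ Vxx @ fu[t]     # Q_{u,ut}
        Qux = fu[t].T @ Vxx @ fx[t]              # Q_{u,xt}
        Qx = lx[t] + fx[t].T @ Vx                # Q_{xt}
        Qu = lu[t] + fu[t].T @ Vx                # Q_{ut}
        Quu_inv = np.linalg.inv(Quu)
        K.insert(0, -Quu_inv @ Qux)              # feedback gain K_t
        k.insert(0, -Quu_inv @ Qu)               # feedforward term k_t
        Vxx = Qxx - Qux.T @ Quu_inv @ Qux        # V_{x,xt} in Equation (1)
        Vx = Qx - Qux.T @ Quu_inv @ Qu           # V_{xt} in Equation (1)
    return K, k
```

The covariance of the corresponding linear-Gaussian controller would then be Quu_inv at each step, matching the maximum entropy construction above.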
However, unlike iLQG, our method operates on systems where the dynamics are unknown.

3 Trajectory Optimization under Unknown Dynamics

When the dynamics N(f_{xt} x_t + f_{ut} u_t, F_t) are unknown, we can estimate them using samples {(x_{ti}, u_{ti})^T, x_{t+1,i}} from the real system under the previous linear-Gaussian controller p(u_t|x_t), where τ_i = {x_{1i}, u_{1i}, . . . , x_{Ti}, u_{Ti}} is the ith rollout. Once we estimate the linear-Gaussian dynamics at each time step, we can simply run the dynamic programming algorithm in the preceding section to obtain a new linear-Gaussian controller. However, the fitted dynamics are only valid in a local region around the samples, while the new controller generated by iLQG can be arbitrarily different from the old one. The fully model-based iLQG method addresses this issue with a line search [23], which is impractical when the rollouts must be stochastically sampled from the real system. Without the line search, large changes in the trajectory will cause the algorithm to quickly fall into unstable, costly parts of the state space, preventing convergence. We address this issue by limiting the change in the trajectory distribution in each dynamic programming pass, imposing a constraint on the KL-divergence between the old and new trajectory distributions.

3.1 KL-Divergence Constraints

Under linear-Gaussian controllers, a KL-divergence constraint against the previous trajectory distribution p̂(τ) can be enforced with a simple modification of the cost function. Omitting the dynamics constraint for clarity, the constrained problem is given by

min_{p(τ)∈N(τ)} E_p[ℓ(τ)]   s.t.   D_KL(p(τ) ∥ p̂(τ)) ≤ ϵ.

This type of policy update has previously been proposed by several authors in the context of policy search [1, 19, 17]. The objective of this optimization is the standard expected cost objective, and solving this problem repeatedly, each time setting p̂(τ) to the last p(τ), will minimize E_{p(x_t,u_t)}[ℓ(x_t, u_t)].
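The per-time-step dynamics estimate described at the start of this section amounts to a linear regression of x_{t+1} on (x_t, u_t) over the sampled transitions. A sketch of this fit (our construction, not the paper's code):

```python
# Fit time-varying linear dynamics x_{t+1} ~ N(fx x_t + fu u_t, Ft) at one
# time step by least squares over N sampled transitions from the controller.
import numpy as np

def fit_linear_dynamics(X, U, Xn):
    """X: (N, dx) states, U: (N, du) actions, Xn: (N, dx) next states."""
    XU = np.hstack([X, U])                       # regressors (x_ti, u_ti)
    W, *_ = np.linalg.lstsq(XU, Xn, rcond=None)  # solve XU @ W ~= Xn
    fx, fu = W[:X.shape[1]].T, W[X.shape[1]:].T  # split into state/action parts
    resid = Xn - XU @ W
    Ft = resid.T @ resid / max(len(X) - 1, 1)    # noise covariance F_t
    return fx, fu, Ft
```

The paper additionally regularizes this fit with a prior from a background dynamics distribution (Section 3.2), which this bare least-squares sketch omits.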
Using η to represent the dual variable, the Lagrangian of this problem is

L_traj(p(τ), η) = E_p[ℓ(τ)] + η [D_KL(p(τ) ∥ p̂(τ)) − ϵ].

Since p(x_{t+1}|x_t, u_t) = p̂(x_{t+1}|x_t, u_t) = N(f_{xt} x_t + f_{ut} u_t, F_t) due to the linear-Gaussian dynamics assumption, the Lagrangian can be rewritten as

L_traj(p(τ), η) = [ Σ_t E_{p(x_t,u_t)}[ℓ(x_t, u_t) − η log p̂(u_t|x_t)] ] − η H(p(τ)) − η ϵ.

Dividing both sides of this equation by η gives us an objective of the same form as Equation (2), which means that under linear dynamics we can minimize the Lagrangian with respect to p(τ) using the dynamic programming algorithm from the preceding section, with an augmented cost function ℓ̃(x_t, u_t) = (1/η) ℓ(x_t, u_t) − log p̂(u_t|x_t). We can therefore solve the original constrained problem by using dual gradient descent [2], alternating between using dynamic programming to minimize the Lagrangian with respect to p(τ), and adjusting the dual variable according to the amount of constraint violation. Using a bracketing line search with quadratic interpolation [7], this procedure usually converges within a few iterations, especially if we accept approximate constraint satisfaction, for example by stopping when the KL-divergence is within 10% of ϵ. Empirically, we found that the line search tends to require fewer iterations in log space, treating the dual as a function of ν = log η, which also has the convenient effect of enforcing the positivity of η. The dynamic programming pass does not guarantee that Q_{u,ut}^{−1}, which is the covariance of the linear-Gaussian controller, will always remain positive definite, since nonconvex cost functions can introduce negative eigenvalues into Equation (1) [23]. To address this issue, we can simply increase η until each Q_{u,ut} becomes positive definite, which is always possible, since the positive definite precision matrix of p̂(u_t|x_t), multiplied by η, enters additively into Q_{u,ut}. This might sometimes result in the KL-divergence being lower than ϵ, though this happens rarely in practice.
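The dual update described above can be illustrated schematically. Since the KL-divergence of the re-optimized trajectory distribution shrinks as η grows, a search over ν = log η (which automatically keeps η positive) can find the value satisfying the constraint to within 10%. The sketch below uses simple bisection instead of the paper's quadratic-interpolation bracketing line search, and `kl_of_eta` stands in for a full backward pass (here a toy monotone function):

```python
# Toy dual search on nu = log(eta): shrink a bracket until the resulting
# KL-divergence is within 10% of the target epsilon. kl_of_eta is assumed to
# be monotonically decreasing in eta, as in the trajectory update.
import math

def solve_eta(kl_of_eta, eps, lo=-10.0, hi=10.0, iters=50):
    nu = 0.5 * (lo + hi)
    for _ in range(iters):
        nu = 0.5 * (lo + hi)                          # midpoint in log space
        kl = kl_of_eta(math.exp(nu))
        if abs(kl - eps) <= 0.1 * eps:                # accept approximate satisfaction
            break
        lo, hi = (lo, nu) if kl < eps else (nu, hi)   # shrink the bracket
    return math.exp(nu)

eta = solve_eta(lambda e: 1.0 / (1.0 + e), eps=0.1)   # toy: KL decreases in eta
```

For the toy KL function 1/(1+η) with ϵ = 0.1, the search settles near η = 9, where the constraint holds with equality.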
The step ϵ can be adaptively adjusted based on the discrepancy between the improvement in total cost predicted under the linear dynamics and quadratic cost approximation, and the actual improvement, which can be estimated using the new linear dynamics and quadratic cost. Since these quantities only involve expectations of quadratics under Gaussians, they can be computed analytically. The amount of improvement obtained from optimizing p(τ) depends on the accuracy of the estimated dynamics. In general, the sample complexity of this estimation depends on the dimensionality of the state. However, the dynamics at nearby time steps and even successive iterations are correlated, and we can exploit this correlation to reduce the required number of samples.

3.2 Background Dynamics Distribution

When fitting the dynamics, we can use priors to greatly reduce the number of samples required at each iteration. While these priors can be constructed using domain knowledge, a more general approach is to construct the prior from samples at other time steps and iterations, by fitting a background dynamics distribution as a kind of crude global model. For physical systems such as robots, a good choice for this distribution is a Gaussian mixture model (GMM), which corresponds to softly piecewise linear dynamics. The dynamics of a robot can be reasonably approximated with such piecewise linear functions [9], and they are well suited for contacts, which are approximately piecewise linear with a hard boundary. If we build a GMM over vectors (x_t, u_t, x_{t+1})^T, we see that within each cluster c_i, the conditional c_i(x_{t+1}|x_t, u_t) represents a linear-Gaussian dynamics model, while the marginal c_i(x_t, u_t) specifies the region of the state-action space where this model is valid. Although the GMM models (softly) piecewise linear dynamics, it is not necessarily a good forward model, since the marginals c_i(x_t, u_t) will not always delineate the correct boundary between two linear modes.
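Both the per-cluster conditionals above and the per-time-step fit rely on the same operation: conditioning a joint Gaussian over (x_t, u_t, x_{t+1}) on (x_t, u_t) to obtain linear-Gaussian dynamics. A sketch of that conditioning (names are ours; the normal-inverse-Wishart prior blending is omitted):

```python
# Condition a joint Gaussian over (x_t, u_t, x_{t+1}) on (x_t, u_t) to obtain
# linear-Gaussian dynamics x_{t+1} ~ N(F [x;u] + f0, Ft). dxu is the combined
# dimension of (x_t, u_t).
import numpy as np

def condition_dynamics(mu, sigma, dxu):
    s_aa = sigma[:dxu, :dxu]              # Cov[(x,u), (x,u)]
    s_ba = sigma[dxu:, :dxu]              # Cov[x', (x,u)]
    F = s_ba @ np.linalg.inv(s_aa)        # regression matrix [fx fu]
    f0 = mu[dxu:] - F @ mu[:dxu]          # constant offset
    Ft = sigma[dxu:, dxu:] - F @ s_ba.T   # conditional (noise) covariance F_t
    return F, f0, Ft
```

Because the linear mode is read off the covariance of the actual samples at each time step, different modes can be recovered at different times even for nearby states, which is the property the text emphasizes for contacts.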
In the case of contacts, the boundary might have a complex shape that is not well modeled by a GMM. However, if we use the GMM to obtain a prior for linear regression, it is easy to determine the correct linear mode from the covariance of (x_{ti}, u_{ti}) with x_{t+1,i} in the current samples at time step t. The time-varying linear dynamics can then capture different linear modes at different time steps depending on the actual observed transitions, even if the states are very similar. To use the GMM to construct a prior for the dynamics, we refit the GMM at each iteration to all of the samples at all time steps from the current iteration, as well as several prior iterations, in order to ensure that sufficient samples are available. We then estimate the time-varying linear dynamics by fitting a Gaussian to the samples {x_{ti}, u_{ti}, x_{t+1,i}} at each time step, which can be conditioned on (x_t, u_t)^T to obtain linear-Gaussian dynamics. The GMM is used to produce a normal-inverse-Wishart prior for the mean and covariance of this Gaussian at each time step. To obtain the prior, we infer the cluster weights for the samples at the current time step, and then use the weighted mean and covariance of these clusters as the prior parameters. We found that the best results were produced by large mixtures that modeled the dynamics in high detail. In practice, the GMM allowed us to reduce the number of samples at each iteration by a factor of 4 to 8, well below the dimensionality of the system.

4 General Parameterized Policies

The algorithm in the preceding section optimizes time-varying linear-Gaussian controllers.
To learn arbitrary parameterized policies, we combine this algorithm with a guided policy search (GPS) approach. In GPS methods, the parameterized policy is trained in supervised fashion to match samples from a trajectory distribution, and the trajectory distribution is optimized to minimize both its cost and its difference from the current policy, thereby creating a good training set for the policy. By turning policy optimization into a supervised problem, GPS algorithms can train complex policies with thousands of parameters [12, 14], and since our trajectory optimization algorithm exploits the structure of linear-Gaussian controllers, it can optimize the individual trajectories with fewer samples than general-purpose model-free methods. As a result, the combined approach can learn complex policies that are difficult to train with prior methods, as shown in our evaluation.

Algorithm 1 Guided policy search with unknown dynamics
1: for iteration k = 1 to K do
2:   Generate samples {τ_i^j} from each linear-Gaussian controller p_i(τ) by performing rollouts
3:   Fit the dynamics p_i(x_{t+1}|x_t, u_t) to the samples {τ_i^j}
4:   Minimize Σ_{i,t} λ_{i,t} D_KL(p_i(x_t) π_θ(u_t|x_t) ∥ p_i(x_t, u_t)) with respect to θ using samples {τ_i^j}
5:   Update p_i(u_t|x_t) using the algorithm in Section 3 and the supplementary appendix
6:   Increment dual variables λ_{i,t} by α D_KL(p_i(x_t) π_θ(u_t|x_t) ∥ p_i(x_t, u_t))
7: end for
8: return optimized policy parameters θ

We build on the recently proposed constrained GPS algorithm, which enforces agreement between the policy and trajectory by means of a soft KL-divergence constraint [14]. Constrained GPS optimizes the maximum entropy objective E_{π_θ}[ℓ(τ)] − H(π_θ), but our trajectory optimization method allows us to use the more standard expected cost objective, resulting in the following optimization:

min_{θ, p(τ)} E_{p(τ)}[ℓ(τ)]   s.t.   D_KL(p(x_t) π_θ(u_t|x_t) ∥ p(x_t, u_t)) = 0   ∀t.
If the constraint is enforced exactly, the policy π_θ(u_t|x_t) is identical to p(u_t|x_t), and the optimization minimizes the cost under π_θ, given by E_{π_θ}[ℓ(τ)]. Constrained GPS enforces these constraints softly, so that π_θ and p gradually come into agreement over the course of the optimization. In general, we can use multiple distributions p_i(τ), with each trajectory starting from a different initial state or in different conditions, but we will omit the subscript for simplicity, since each p_i(τ) is treated identically and independently. The Lagrangian of this problem is given by

L_GPS(θ, p, λ) = E_{p(τ)}[ℓ(τ)] + Σ_{t=1}^T λ_t D_KL(p(x_t) π_θ(u_t|x_t) ∥ p(x_t, u_t)).

The GPS Lagrangian is minimized with respect to θ and p(τ) in alternating fashion, with the dual variables λ_t updated to enforce constraint satisfaction. Optimizing L_GPS with respect to p(τ) corresponds to trajectory optimization, which in our case involves dual gradient descent on L_traj in Section 3.1, and optimizing with respect to θ corresponds to supervised policy optimization to minimize the weighted sum of KL-divergences. The constrained GPS method also uses dual gradient descent to update the dual variables, but we found that, in practice, it is unnecessary (and, in the unknown model setting, extremely inefficient) to optimize L_GPS with respect to p(τ) and θ to convergence prior to each dual variable update. Instead, we increment the dual variables after each iteration by a multiple α of the KL-divergence (α = 10 works well), which corresponds to a penalty method. Note that the dual gradient descent on L_traj during trajectory optimization is unrelated to the policy constraints, and is treated as an inner-loop black-box optimizer by GPS. Pseudocode for our modified constrained GPS method is provided in Algorithm 1.
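The penalty-style dual update in line 6 of Algorithm 1 is simple enough to state directly: after each iteration, each λ_t is incremented by α times the current KL-divergence, so persistent disagreement between policy and trajectory is penalized increasingly heavily. A trivial sketch (names are ours):

```python
# Penalty-method update for the GPS dual variables: lambda_t += alpha * KL_t.
# alpha = 10 is the value reported to work well in the text.
def update_duals(lmbda, kl_per_step, alpha=10.0):
    return [l + alpha * kl for l, kl in zip(lmbda, kl_per_step)]

lmbda = update_duals([1.0, 1.0], [0.2, 0.05])  # approximately [3.0, 1.5]
```

Unlike a full dual gradient ascent, this update is applied once per iteration without optimizing the Lagrangian to convergence first, which is the efficiency point the text makes.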
The policy KL-divergence terms in the objective also necessitate a modified dynamic programming method, which can be found in prior work [14], but the step size constraints are still enforced as described in the preceding section, by modifying the cost. The same samples that are used to fit the dynamics are also used to train the policy, with the policy trained to minimize λ_t D_KL(π_θ(u_t|x_{ti}) ∥ p(u_t|x_{ti})) at each sampled state x_{ti}. Further details about this algorithm can be found in the supplementary appendix. Although this method optimizes the expected cost of the policy, due to the alternating optimization, its entropy tends to remain high, since both the policy and trajectory must decrease their entropy together to satisfy the constraint, which requires many alternating steps. To speed up this process, we found it useful to regularize the policy by penalizing its entropy directly, which speeds up convergence and produces more deterministic policies.

[Figure 1 plots omitted: learning curves of samples vs. target distance (or distance travelled) for the octopus arm, 2D insertion, 3D insertion, and swimming tasks, comparing iLQG with the true model, REPS, CEM, RWR, PILCO, and our method with and without the GMM.] Figure 1: Results for learning linear-Gaussian controllers for 2D and 3D insertion, octopus arm, and swimming. Our approach uses fewer samples and finds better solutions than prior methods, and the GMM further reduces the required sample count. Images in the lower-right show the last time step for each system at several iterations of our method, with red lines indicating end effector trajectories.
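The per-state supervised objective above, λ_t D_KL(π_θ(u_t|x_{ti}) ∥ p(u_t|x_{ti})), is a KL divergence between two Gaussians, which has a standard closed form; a minimal univariate sketch (the method itself uses the multivariate analogue):

```python
import math

def kl_gauss(mu_q, var_q, mu_p, var_p):
    # Closed-form KL(q || p) for univariate Gaussians:
    # KL = 0.5 * [ log(var_p/var_q) + (var_q + (mu_q - mu_p)^2)/var_p - 1 ].
    return 0.5 * (math.log(var_p / var_q)
                  + (var_q + (mu_q - mu_p) ** 2) / var_p
                  - 1.0)
```

The divergence is zero only when the two Gaussians coincide, so the weighted sum of these terms vanishes exactly when the policy matches the linear-Gaussian controller at every sampled state.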
5 Experimental Evaluation We evaluated both the trajectory optimization method and general policy search on simulated robotic manipulation and locomotion tasks. The state consisted of joint angles and velocities, and the actions corresponded to joint torques. The parameterized policies were neural networks with one hidden layer and a soft rectifier nonlinearity of the form a = log(1 + exp(z)), with learned diagonal Gaussian noise added to the outputs to produce a stochastic policy. This policy class was chosen for its expressiveness, to allow the policy to learn a wide range of strategies. However, due to its high dimensionality and nonlinearity, it also presents a serious challenge for policy search methods. The tasks are 2D and 3D peg insertion, octopus arm control, and planar swimming and walking. The insertion tasks require fitting a peg into a narrow slot, a task that comes up, for example, when inserting a key into a keyhole, or assembly with screws or nails. The difficulty stems from the need to align the peg with the slot and the complex contacts between the peg and the walls, which result in discontinuous dynamics. Control in the presence of contacts is known to be challenging, and this experiment is important for ascertaining how well our method can handle such discontinuities. Octopus arm control involves moving the tip of a flexible arm to a goal position [6]. The challenge in this task stems from its high dimensionality: the arm has 25 degrees of freedom, corresponding to 50 state dimensions. The swimming task requires controlling a three-link snake, and the walking task requires a seven-link biped to maintain a target velocity. The challenge in these tasks comes from underactuation. Details of the simulation and cost for each task are in the supplementary appendix. 
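The policy class just described, one hidden layer with the soft rectifier a = log(1 + exp(z)) and learned diagonal Gaussian noise on the outputs, can be sketched as follows; the layer sizes and weights below are illustrative placeholders, not values from the paper:

```python
import math, random

def softplus(z):
    # Soft rectifier used for the hidden layer: a = log(1 + exp(z)).
    # (For large z, z + log1p(exp(-z)) is the overflow-safe variant.)
    return math.log1p(math.exp(z))

def policy_forward(x, W1, b1, W2, b2, log_std):
    # One hidden layer with softplus nonlinearity.
    h = [softplus(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    mean = [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]
    # Stochastic policy: add learned diagonal Gaussian noise to the mean.
    return [m + math.exp(s) * random.gauss(0.0, 1.0)
            for m, s in zip(mean, log_std)]
```

The learned per-dimension log standard deviations make the policy stochastic, which matters for the entropy-regularized training discussed above.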
5.1 Trajectory Optimization Figure 1 compares our method with prior work on learning linear-Gaussian controllers for peg insertion, octopus arm, and swimming (walking is discussed in the next section). The horizontal axis shows the total number of samples, and the vertical axis shows the minimum distance between the end of the peg and the bottom of the slot, the distance to the target for the octopus arm, or the total distance travelled by the swimmer. Since the peg is 0.5 units long, distances above this amount correspond to controllers that cannot perform an insertion. We compare to REPS [17], reward-weighted regression (RWR) [18, 11], the cross-entropy method (CEM) [21], and PILCO [5]. We also use iLQG [15] with a known model as a baseline, shown as a black horizontal line. REPS is a model-free method that, like our approach, enforces a KL-divergence constraint between the new and old policy. We compare to a variant of REPS that also fits linear dynamics to generate 500 pseudo-samples [16], which we label "REPS (20 + 500)." RWR is an EM algorithm that fits the policy to previous samples weighted by the exponential of their reward, and CEM fits the policy to the best samples in each batch. With Gaussian trajectories, CEM and RWR only differ in the weights. These methods represent a class of RL algorithms that fit the policy to weighted samples, including PoWER and PI2 [11, 24, 22]. PILCO is a model-based method that uses a Gaussian process to learn a global dynamics model that is used to optimize the policy. REPS and PILCO require solving large nonlinear optimizations at each iteration, while our method does not. Our method used 5 rollouts with the GMM, and 20 without. Due to its computational cost, PILCO was provided with 5 rollouts per iteration, while other prior methods used 20 and 100. Our method learned much more effective controllers with fewer samples, especially when using the GMM. On 3D insertion, it outperformed the iLQG baseline, which used a known model. Contact discontinuities cause problems for derivative-based methods like iLQG, as well as methods like PILCO that learn a smooth global dynamics model. We use a time-varying local model, which preserves more detail, and fitting the model to samples has a smoothing effect that mitigates discontinuity issues. Prior policy search methods could servo to the hole, but were unable to insert the peg. On the octopus arm, our method succeeded despite the high dimensionality of the state and action spaces.1 Prior work used simplified "macro-actions" to solve this task, while our method directly controlled each degree of freedom [6]. Our method also successfully learned a swimming gait, while prior model-free methods could not initiate forward motion.2 PILCO also learned an effective gait due to the smooth dynamics of this task, but its GP-based optimization required orders of magnitude more computation time than our method, taking about 50 minutes per iteration.

[Figure 2 plots omitted: learning curves of policy samples vs. target distance (or distance travelled) for walking, 2D insertion, 3D insertion, and swimming, comparing CEM, RWR, and our method with and without the GMM.] Figure 2: Comparison on neural network policies. For insertion, the policy was trained to search for an unknown slot position on four slot positions (shown above), and generalization to new positions is graphed with dashed lines. Note how the end effector (in red) follows the surface to find the slot, and how the swimming gait is smoother due to the stationary policy (also see supplementary video).
These results suggest that our method combines the sample efficiency of model-based methods with the versatility of model-free techniques. However, this method is designed specifically for linear-Gaussian controllers. In the next section, we present results for learning more general policies with our method, using the linear-Gaussian controllers within the framework of guided policy search. 5.2 Neural Network Policy Learning with Guided Policy Search By using our method with guided policy search, we can learn arbitrary parameterized policies. Figure 2 shows results for training neural network policies for each task, with comparisons to prior methods that optimize the policy parameters directly.3 On swimming, our method achieved similar performance to the linear-Gaussian case, but since the neural network policy was stationary, the resulting gait was much smoother. Previous methods could only solve this task with 100 samples per iteration, with RWR eventually obtaining a distance of 0.5m after 4000 samples, and CEM reaching 2.1m after 3000. Our method was able to reach such distances with many fewer samples.

Footnote 1: The high dimensionality of the octopus arm made it difficult to run PILCO, though in principle, such methods should perform well on this task given the arm's smooth dynamics.
Footnote 2: Even iLQG requires many iterations to initiate any forward motion, but then makes rapid progress. This suggests that prior methods were simply unable to get over the initial threshold of initiating forward movement.
Footnote 3: PILCO cannot optimize neural network controllers, and we could not obtain reasonable results with REPS. Prior applications of REPS generally focus on simpler, lower-dimensional policy classes [17, 16].

Generating walking from scratch is extremely challenging even with a known model. We therefore initialize the gait from demonstration, as in prior work [12]. The supplementary website also shows some gaits generated from scratch.
To generate the initial samples, we assume that the demonstration can be stabilized with a linear feedback controller. Building such controllers around examples has been addressed in prior work [3]. The RWR and CEM policies were initialized with samples from this controller to provide a fair comparison. The walker used 5 samples per iteration with the GMM, and 40 without it. The graph shows the average distance travelled on rollouts that did not fall, and shows that only our method was able to learn walking policies that succeeded consistently. On the insertion tasks, the neural network was trained to insert the peg without precise knowledge of the position of the hole, making this a partially observed problem. The holes were placed in a region of radius 0.2 units in 2D and 0.1 units in 3D. The policies were trained on four different hole positions, and then tested on four new hole positions to evaluate generalization. The generalization results are shown with dashed lines in Figure 2. The position of the hole was not provided to the neural network, and the policies therefore had to find the hole by “feeling” for it, with only joint angles and velocities as input. Only our method could acquire a successful strategy to locate both the training and test holes, although RWR was eventually able to insert the peg into one of the four holes in 2D. This task illustrates one of the advantages of learning expressive neural network policies, since no single trajectory-based policy can represent such a search strategy. Videos of the learned policies can be viewed at http://rll.berkeley.edu/nips2014gps/. 6 Discussion We presented an algorithm that can optimize linear-Gaussian controllers under unknown dynamics by iteratively fitting local linear dynamics models, with a background dynamics distribution acting as a prior to reduce the sample complexity. 
We showed that this approach can be used to train arbitrary parameterized policies within the framework of guided policy search, where the parameterized policy is optimized to match the linear-Gaussian controllers. In our evaluation, we show that this method can train complex neural network policies that act intelligently in partially observed environments, even for tasks that cannot be solved with direct model-free policy search. By using local linear models, our method is able to outperform model-free policy search methods. On the other hand, the learned models are highly local and time-varying, in contrast to model-based methods that rely on learning an effective global model [4]. This allows our method to handle even the complicated and discontinuous dynamics encountered in the peg insertion task, which we show present a challenge for model-based methods that use smooth dynamics models [5]. Our approach occupies a middle ground between model-based and model-free techniques, allowing it to learn rapidly, while still succeeding in domains where the true model is difficult to learn. Our use of a KL-divergence constraint during trajectory optimization parallels several prior model-free methods [1, 19, 17, 20, 16]. Trajectory-centric policy learning has also been explored in detail in robotics, with a focus on dynamic movement primitives (DMPs) [8, 24]. Time-varying linear-Gaussian controllers are in general more expressive, though they incorporate less prior information. DMPs constrain the final state to a goal state, and only encode target states, relying on an existing controller to track those states with suitable controls. The improved performance of our method is due in part to the use of stronger assumptions about the task, compared to general policy search methods. For instance, we assume that time-varying linear-Gaussians are a reasonable local approximation for the dynamics.
While this assumption is sensible for physical systems, it would require additional work to extend to hybrid discrete-continuous tasks. Our method also suggests some promising future directions. Since the parameterized policy is trained directly on samples from the real world, it can incorporate sensory information that is difficult to simulate but useful in partially observed domains, such as force sensors on a robotic gripper, or even camera images, while the linear-Gaussian controllers are trained directly on the true state under known, controlled conditions, as in our peg insertion experiments. This could provide for superior generalization for partially observed tasks that are otherwise extremely challenging to learn. Acknowledgments This research was partly funded by a DARPA Young Faculty Award #D13AP0046.

References
[1] J. A. Bagnell and J. Schneider. Covariant policy search. In International Joint Conference on Artificial Intelligence (IJCAI), 2003.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, 2004.
[3] A. Coates, P. Abbeel, and A. Ng. Learning for control from multiple demonstrations. In International Conference on Machine Learning (ICML), 2008.
[4] M. Deisenroth, G. Neumann, and J. Peters. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1–142, 2013.
[5] M. Deisenroth and C. Rasmussen. PILCO: a model-based and data-efficient approach to policy search. In International Conference on Machine Learning (ICML), 2011.
[6] Y. Engel, P. Szabó, and D. Volkinshtein. Learning to control an octopus arm with Gaussian process temporal difference methods. In Advances in Neural Information Processing Systems (NIPS), 2005.
[7] R. Fletcher. Practical Methods of Optimization. Wiley-Interscience, New York, NY, 1987.
[8] A. Ijspeert, J. Nakanishi, and S. Schaal. Learning attractor landscapes for learning motor primitives. In Advances in Neural Information Processing Systems (NIPS), 2003.
[9] S. M. Khansari-Zadeh and A. Billard. BM: An iterative algorithm to learn stable non-linear dynamical systems with Gaussian mixture models. In International Conference on Robotics and Automation (ICRA), 2010.
[10] J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. International Journal of Robotic Research, 32(11):1238–1274, 2013.
[11] J. Kober and J. Peters. Learning motor primitives for robotics. In International Conference on Robotics and Automation (ICRA), 2009.
[12] S. Levine and V. Koltun. Guided policy search. In International Conference on Machine Learning (ICML), 2013.
[13] S. Levine and V. Koltun. Variational policy search via trajectory optimization. In Advances in Neural Information Processing Systems (NIPS), 2013.
[14] S. Levine and V. Koltun. Learning complex neural network policies with trajectory optimization. In International Conference on Machine Learning (ICML), 2014.
[15] W. Li and E. Todorov. Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), pages 222–229, 2004.
[16] R. Lioutikov, A. Paraschos, G. Neumann, and J. Peters. Sample-based information-theoretic stochastic optimal control. In International Conference on Robotics and Automation (ICRA), 2014.
[17] J. Peters, K. Mülling, and Y. Altün. Relative entropy policy search. In AAAI Conference on Artificial Intelligence, 2010.
[18] J. Peters and S. Schaal. Applying the episodic natural actor-critic architecture to motor primitive learning. In European Symposium on Artificial Neural Networks (ESANN), 2007.
[19] J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682–697, 2008.
[20] K. Rawlik, M. Toussaint, and S. Vijayakumar. On stochastic optimal control and reinforcement learning by approximate inference. In Robotics: Science and Systems, 2012.
[21] R. Rubinstein and D. Kroese. The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation and Machine Learning. Springer, 2004.
[22] F. Stulp and O. Sigaud. Path integral policy improvement with covariance matrix adaptation. In International Conference on Machine Learning (ICML), 2012.
[23] Y. Tassa, T. Erez, and E. Todorov. Synthesis and stabilization of complex behaviors through online trajectory optimization. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012.
[24] E. Theodorou, J. Buchli, and S. Schaal. Reinforcement learning of motor skills in high dimensions. In International Conference on Robotics and Automation (ICRA), 2010.
[25] M. Toussaint. Robot trajectory optimization using approximate inference. In International Conference on Machine Learning (ICML), 2009.
Optimizing F-Measures by Cost-Sensitive Classification Shameem A. Puthiya Parambath, Nicolas Usunier, Yves Grandvalet Université de Technologie de Compiègne – CNRS, Heudiasyc UMR 7253 Compiègne, France {sputhiya,nusunier,grandval}@utc.fr Abstract We present a theoretical analysis of F-measures for binary, multiclass and multilabel classification. These performance measures are non-linear, but in many scenarios they are pseudo-linear functions of the per-class false negative/false positive rate. Based on this observation, we present a general reduction of F-measure maximization to cost-sensitive classification with unknown costs. We then propose an algorithm with provable guarantees to obtain an approximately optimal classifier for the F-measure by solving a series of cost-sensitive classification problems. The strength of our analysis is to be valid on any dataset and any class of classifiers, extending the existing theoretical results on F-measures, which are asymptotic in nature. We present numerical experiments to illustrate the relative importance of cost asymmetry and thresholding when learning linear classifiers on various F-measure optimization tasks. 1 Introduction The F1-measure, defined as the harmonic mean of the precision and recall of a binary decision rule [20], is a traditional way of assessing the performance of classifiers. As it favors high and balanced values of precision and recall, this performance metric is usually preferred to (label-dependent weighted) classification accuracy when classes are highly imbalanced and when the cost of a false positive relative to a false negative is not naturally given for the problem at hand.
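As a quick numerical illustration (not from the paper), the Fβ-measure can be computed directly from confusion counts, reducing to the harmonic mean of precision and recall when β = 1:

```python
def f_beta(tp, fp, fn, beta=1.0):
    # F_beta from confusion counts; equals the harmonic mean of
    # precision and recall when beta = 1.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

The same value can be obtained directly from counts as (1 + β²)·tp / ((1 + β²)·tp + β²·fn + fp), which is the form analyzed throughout the paper.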
The design of methods to optimize the F1-measure and its variants for multilabel classification (the micro-, macro-, and per-instance F1-measures, see [23] and Section 2), and the theoretical analysis of the optimal classifiers for such metrics, have received considerable interest in the last 3-4 years [6, 15, 4, 18, 5, 13], especially because rare classes appear naturally on most multilabel datasets with many labels. The most usual way of optimizing the F1-measure is a two-step approach in which a classifier that outputs scores (e.g. a margin-based classifier) is first learnt, and the decision threshold is then tuned a posteriori. Such an approach is theoretically grounded in binary classification [15] and for the micro- or macro-F1-measures of multilabel classification [13], in that a Bayes-optimal classifier for the corresponding F1-measure can be obtained by thresholding posterior probabilities of classes (the threshold, however, depends on properties of the whole distribution and cannot be known in advance). Thus, such arguments are essentially asymptotic, since the validity of the procedure is bound to the ability to accurately estimate all the level sets of the posterior probabilities; in particular, the proof does not hold if one wants to find the optimal classifier for the F1-measure over an arbitrary set of classifiers (e.g. thresholded linear functions). In this paper, we show that optimizing the F1-measure in binary classification over any (possibly restricted) class of functions and over any data distribution (population-level or on a finite sample) can be reduced to solving an (infinite) series of cost-sensitive classification problems, but the cost space can be discretized to obtain approximately optimal solutions.
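The two-step baseline described above, learning a scorer and then tuning the decision threshold for the F1-measure a posteriori, can be sketched on toy data (the scores and labels are illustrative, not from the paper):

```python
def best_threshold_f1(scores, labels):
    # Evaluate F1 at every candidate threshold (each observed score)
    # and return the threshold with the highest F1 on this sample.
    def f1_at(t):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return max(set(scores), key=f1_at)
```

As the paper emphasizes, the threshold maximizing F1 depends on the whole score distribution, so it cannot be fixed in advance and must be tuned on data.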
For binary classification, as well as for multilabel classification (micro-F1-measure in general and the macro-F1-measure when training independent classifiers per class), the discretization can be made along a single real-valued variable in [0, 1] with approximation guarantees. Asymptotically, our result is, in essence, equivalent to prior results since Bayes-optimal classifiers for cost-sensitive classification are precisely given by thresholding the posterior probabilities, and we recover the relationship between the optimal F1-measure and the optimal threshold given by Lipton et al. [13]. Our reduction to cost-sensitive classification, however, is strictly more general. Our analysis is based on the pseudo-linearity of the F1-scores (the level sets, as functions of the false negative and false positive rates, are linear) and holds in any asymptotic or non-asymptotic regime, with any arbitrary set of classifiers (without the requirement to output scores or accurate posterior probability estimates). Our formal framework and the definition of pseudo-linearity is presented in the next section, and the reduction to cost-sensitive classification is presented in Section 3. While our main contribution is the theoretical part, we also turn to the practical suggestions of our results. In particular, they suggest that, for binary classification, learning cost-sensitive classifiers may be more effective than thresholding probabilities. This is in line with Musicant et al. [14], although their argument only applies to SVM and does not consider the F1-measure itself but a continuous, non-convex approximation of it. Some experimental results are presented in Section 4, before the conclusion of the paper. 2 Pseudo-Linearity and F-Measures Our results are mainly motivated by the maximization of F-measures for binary and multilabel classification.
They are based on a general property of these performance metrics, namely their pseudo-linearity with respect to the false negative/false positive probabilities. For binary classification, the results we prove in Section 3 are that in order to optimize the Fmeasure, it is sufficient to solve a binary classification problem with different costs allocated to false positive and false negative errors (Proposition 4). However, these costs are not known a priori, so in practice we need to learn several classifiers with different costs, and choose the best one (according to the F-score) in a second step. Propositions 5 and 6 provide approximation guarantees on the F-score we can obtain by following this principle depending on the granularity of the search in the cost space. Our results are not specific to the F1-measure in binary classification, and they naturally extend to other cases of F-measures with similar functional forms. For that reason, we present the results and prove them directly for the general case, following the framework that we describe in this section. We first present the machine learning framework we consider, and then give the general definition of pseudo-convexity. Then, we provide examples of F-measures for binary, multilabel and multiclass classification and we show how they fit into this framework. 2.1 Notation and Definitions We are given (i) a measurable space X ×Y, where X is the input space and Y is the (finite) prediction set, (ii) a probability measure µ over X × Y, and (iii) a set of (measurable) classifiers H from the input space X to Y. We distinguish here the prediction set Y from the label space L = {1, ..., L}: in binary or single-label multi-class classification, the prediction set Y is the label set L, but in multilabel classification, Y = 2L is the powerset of the set of possible labels. In that framework, we assume that we have an i.i.d. sample drawn from an underlying data distribution P on X × Y. 
The empirical distribution of this finite training (or test) sample will be denoted ˆP. Then, we may take µ = P to get results at the population level (concerning expected errors), or we may take µ = ˆP to get results on a finite sample. Likewise, H can be a restricted set of functions such as linear classifiers if X is a finite-dimensional vector space, or may be the set of all measurable classifiers from X to Y to get results in terms of Bayes-optimal predictors. Finally, when needed, we will use bold characters for vectors and normal font with subscript for indexing. Throughout the paper, we need the notion of pseudo-linearity of a function, which itself is defined from the notion of pseudo-convexity (see e.g. [3, Definition 3.2.1]): a differentiable function F : D ⊂ R^d → R, defined on a convex open subset of R^d, is pseudo-convex if

∀e, e′ ∈ D, F(e) > F(e′) ⇒ ⟨∇F(e), e′ − e⟩ < 0,

where ⟨·, ·⟩ is the canonical dot product on R^d. Moreover, F is pseudo-linear if both F and −F are pseudo-convex. The important property of pseudo-linear functions is that their level sets are hyperplanes (intersected with the domain), and that sublevel and superlevel sets are half-spaces, all of these hyperplanes being defined by the gradient. In practice, working with gradients of non-linear functions may be cumbersome, so we will use the following characterization, which is a rephrasing of [3, Theorem 3.3.9]:

Theorem 1 ([3]) A non-constant function F : D → R, defined and differentiable on the open convex set D ⊆ R^d, is pseudo-linear on D if and only if ∀e ∈ D, ∇F(e) ≠ 0, and there exist a : R → R^d and b : R → R such that, for any t in the image of F:

F(e) ≥ t ⇔ ⟨a(t), e⟩ + b(t) ≤ 0  and  F(e) ≤ t ⇔ ⟨a(t), e⟩ + b(t) ≥ 0.

Pseudo-linearity is the main property of fractional-linear functions (ratios of linear functions). Indeed, let us consider F : e ∈ R^d ↦ (α + ⟨β, e⟩)/(γ + ⟨δ, e⟩) with α, γ ∈ R and β and δ in R^d.
If we restrict the domain of F to the set {e ∈ R^d | γ + ⟨δ, e⟩ > 0}, then, for all t in the image of F and all e in its domain, we have:

F(e) ≤ t ⇔ ⟨tδ − β, e⟩ + tγ − α ≥ 0,

and the analogous equivalence obtained by reversing the inequalities holds as well; the function thus satisfies the conditions of Theorem 1. As we shall see, many F-scores can be written as fractional-linear functions. 2.2 Error Profiles and F-Measures For all classification tasks (binary, multiclass and multilabel), the F-measures we consider are functions of per-class recall and precision, which themselves are defined in terms of the marginal probabilities of classes and the per-class false negative/false positive probabilities. The marginal probability of label k will be denoted by P_k, and the per-class false negative/false positive probabilities of a classifier h are denoted by FN_k(h) and FP_k(h). Their definitions are given below:

(binary/multiclass) P_k = µ({(x, y) | y = k}), FN_k(h) = µ({(x, y) | y = k and h(x) ≠ k}), FP_k(h) = µ({(x, y) | y ≠ k and h(x) = k}).

(multilabel) P_k = µ({(x, y) | k ∈ y}), FN_k(h) = µ({(x, y) | k ∈ y and k ∉ h(x)}), FP_k(h) = µ({(x, y) | k ∉ y and k ∈ h(x)}).

These probabilities of a classifier h are then summarized by the error profile E(h):

E(h) = (FN_1(h), FP_1(h), ..., FN_L(h), FP_L(h)) ∈ R^{2L},

so that e_{2k−1} is the false negative probability for class k and e_{2k} is the false positive probability. Binary Classification In binary classification, we have FN_2 = FP_1 and we write F-measures only by reference to class 1. Then, for any β > 0 and any binary classifier h, the Fβ-measure is

Fβ(h) = (1 + β²)(P_1 − FN_1(h)) / ((1 + β²) P_1 − FN_1(h) + FP_1(h)).

The F1-measure, which is the most widely used, corresponds to the case β = 1. We can immediately notice that Fβ is fractional-linear, hence pseudo-convex, with respect to FN_1 and FP_1.
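The linear level-set characterization of a fractional-linear F can be checked numerically; the coefficient vectors below are arbitrary illustrative values, not from the paper:

```python
def frac_linear(e, alpha, beta, gamma, delta):
    # F(e) = (alpha + <beta, e>) / (gamma + <delta, e>), on the domain
    # where the denominator is positive.
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return (alpha + dot(beta, e)) / (gamma + dot(delta, e))

def level_set_side(e, t, alpha, beta, gamma, delta):
    # Value of <t*delta - beta, e> + t*gamma - alpha: nonnegative exactly
    # when F(e) <= t, matching Theorem 1's linear level-set form.
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return dot([t * d - b for d, b in zip(delta, beta)], e) + t * gamma - alpha
```

For any point in the domain, F(e) ≤ t holds exactly when the linear expression is nonnegative, which is what makes sublevel sets half-spaces.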
Thus, with a slight (yet convenient) abuse of notation, we write the Fβ-measure for binary classification as a function of vectors in R^4 = R^{2L} which represent error profiles of classifiers:

(binary) ∀e ∈ R^4, Fβ(e) = (1 + β²)(P_1 − e_1) / ((1 + β²) P_1 − e_1 + e_2).

Multilabel Classification In multilabel classification, there are several definitions of F-measures. For those based on the error profiles, we first have the macro-F-measures (denoted by MFβ), which are the average over class labels of the Fβ-measures of each binary classification problem associated to the prediction of the presence/absence of a given class:

(multilabel–Macro) MFβ(e) = (1/L) Σ_{k=1}^{L} (1 + β²)(P_k − e_{2k−1}) / ((1 + β²) P_k − e_{2k−1} + e_{2k}).

MFβ is not a pseudo-linear function of an error profile e. However, if the multilabel classification algorithm learns independent binary classifiers for each class (a method known as one-vs-rest or binary relevance [23]), then each binary problem becomes independent and optimizing the macro-F-score boils down to independently maximizing the Fβ-score for L binary classification problems, so that optimizing MFβ is similar to optimizing Fβ in binary classification. There are also micro-F-measures for multilabel classification. They correspond to Fβ-measures for a new binary classification problem over X × L, in which one maps a multilabel classifier h : X → Y (Y is here the power set of L) to the following binary classifier h̃ : X × L → {0, 1}: we have h̃(x, k) = 1 if k ∈ h(x), and 0 otherwise. The micro-Fβ-measure, written as a function of an error profile e and denoted by mFβ(e), is the Fβ-score of h̃ and can be written as:

(multilabel–micro) mFβ(e) = (1 + β²) Σ_{k=1}^{L} (P_k − e_{2k−1}) / ((1 + β²) Σ_{k=1}^{L} P_k + Σ_{k=1}^{L} (e_{2k} − e_{2k−1})).

This function is also fractional-linear, and thus pseudo-linear as a function of e. A third notion of Fβ-measure can be used in multilabel classification, namely the per-instance Fβ studied e.g. by [16, 17, 6, 4, 5].
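The macro- and micro-Fβ expressions above can be evaluated directly from an error profile e = (FN_1, FP_1, ..., FN_L, FP_L); a small sketch with illustrative numbers (0-based indexing, so e[2k] is FN of class k+1 and e[2k+1] is its FP):

```python
def macro_f(e, P, beta=1.0):
    # Average of per-class binary F_beta scores over the L classes.
    b2 = beta ** 2
    return sum((1 + b2) * (P[k] - e[2 * k]) /
               ((1 + b2) * P[k] - e[2 * k] + e[2 * k + 1])
               for k in range(len(P))) / len(P)

def micro_f(e, P, beta=1.0):
    # Micro-F_beta: aggregate the per-class errors before forming the ratio.
    b2 = beta ** 2
    num = (1 + b2) * sum(P[k] - e[2 * k] for k in range(len(P)))
    den = ((1 + b2) * sum(P) +
           sum(e[2 * k + 1] - e[2 * k] for k in range(len(P))))
    return num / den
```

With a single class the two coincide with the binary Fβ; with several classes they generally differ, since the macro version averages ratios while the micro version takes a ratio of aggregates.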
The per-instance Fβ is defined as the average, over instances x, of the binary Fβ-measure for the problem of classifying labels given x. This corresponds to a specific Fβ-maximization problem for each x and is not directly captured by our framework, because we would need to solve different cost-sensitive classification problems for each instance. Multiclass Classification The last example we take is from multiclass classification. It differs from multilabel classification in that a single class must be predicted for each example. This restriction imposes strong global constraints that make the task significantly harder. As in the multilabel case, there are many definitions of F-measures for multiclass classification, and in fact several definitions for the micro-F-measure itself. We will focus on the following one, which is used in information extraction (e.g. in the BioNLP challenge [12]). Given L class labels, we will assume that label 1 corresponds to a "default" class, the prediction of which is considered as not important. In information extraction, the "default" class corresponds to the (majority) case where no information should be extracted. Then, a false negative is an example (x, y) such that y ≠ 1 and h(x) ≠ y, while a false positive is an example (x, y) such that y = 1 and h(x) ≠ y. This micro-F-measure, denoted mcFβ, can be written as:

(multiclass–micro) mcFβ(e) = (1 + β²)(1 − P_1 − Σ_{k=2}^{L} e_{2k−1}) / ((1 + β²)(1 − P_1) − Σ_{k=2}^{L} e_{2k−1} + e_1).

Once again, this kind of micro-Fβ-measure is pseudo-linear with respect to e. Remark 2 (Training and generalization performance) Our results concern a fixed distribution µ, while the goal is to find a classifier with high generalization performance.
With our notation, our results apply to µ = P or µ = ˆP, and our implicit goal is to perform empirical risk minimizationtype learning, that is, to find a classifier with high value of F P β EP(h) by maximizing its empirical counterpart F ˆP β EˆP(h) (the superscripts here make the underlying distribution explicit). Remark 3 (Expected Utility Maximization (EUM) vs Decision-Theoretic Approach (DTA)) Nan et al. [15] propose two possible definitions of the generalization performance in terms of Fβ-scores. In the first framework, called EUM, the population-level Fβ-score is defined as the Fβ-score of the population-level error profiles. In contrast, the Decision-Theoretic approach defines the population-level Fβ-score as the expected value of the Fβ-score over the distribution of test sets. The EUM definition of generalization performance matches our framework using µ = P: in that sense, we follow the EUM framework. Nonetheless, regardless of how we define the generalization performance, our results can be used to maximize the empirical value of the Fβ-score. 3 Optimizing F-Measures by Reduction to Cost-Sensitive Classification The F-measures presented above are non-linear aggregations of false negative/positive probabilities that cannot be written in the usual expected loss minimization framework; usual learning algorithms are thus, intrinsically, not designed to optimize this kind of performance metrics. 4 In this section, we show in Proposition 4 that the optimal classifier for a cost-sensitive classification problem with label dependent costs [7, 24] is also an optimal classifier for the pseudo-linear Fmeasures (within a specific, yet arbitrary classifier set H). In cost-sensitive classification, each entry of the error profile is weighted by a non-negative cost, and the goal is to minimize the weighted average error. Efficient, consistent algorithms exist for such cost-sensitive problems [1, 22, 21]. 
Even though the costs corresponding to the optimal F-score are not known a priori, we show in Proposition 5 that we can approximate the optimal classifier with approximate costs. These costs, explicitly expressed in terms of the optimal F-score, motivate a practical algorithm. 3.1 Reduction to Cost-Sensitive Classification In this section, F : D ⊂Rd →R is a fixed pseudo-linear function. We denote by a : R →Rd the function mapping values of F to the corresponding hyperplane of Theorem 1. We assume that the distribution µ is fixed, as well as the (arbitrary) set of classifier H. We denote by E (H) the closure of the image of H under E, i.e. E (H) = cl({E(h) , h ∈H}) (the closure ensures that E (H) is compact and that minima/maxima are well-defined), and we assume E (H) ⊆D. Finally, for the sake of discussion with cost-sensitive classification, we assume that a(t) ∈Rd + for any e ∈E (H), that is, lower values of errors entail higher values of F. Proposition 4 Let F ⋆= max e′∈E(H) F(e′). We have: e ∈argmin e′∈E(H) a F ⋆ , e′ ⇔F(e) = F ⋆ Proof Let e⋆∈argmaxe′∈E(H) F(e′), and let a⋆= a(F(e⋆)) = a F ⋆ . We first notice that pseudo-linearity implies that the set of e ∈D such that ⟨a⋆, e⟩= ⟨a⋆, e⋆⟩corresponds to the level set {e ∈D|F(e) = F(e⋆) = F ⋆}. Thus, we only need to show that e⋆is a minimizer of e′ 7→⟨a⋆, e′⟩in E (H). To see this, we notice that pseudo-linearity implies ∀e′ ∈D, F(e⋆) ≥F(e′) ⇒⟨a⋆, e⋆⟩≤⟨a⋆, e′⟩ from which we immediately get e⋆∈argmine′∈E(H) ⟨a⋆, e′⟩since e⋆maximizes F in E (H). □ The proposition shows that a F ⋆ are the costs that should be assigned to the error profile in order to find the F-optimal classifier in H. Hence maximizing F amounts to minimizing a F ⋆ , E(h) with respect to h, that is, amounts to solving a cost-sensitive classification problem. The costs a F ⋆ are, however, not known a priori (because F ⋆is not known in general). 
The following result shows that having only approximate costs is sufficient to have an approximately F-optimal solution, which gives us the main step towards a practical solution: Proposition 5 Let ε0 ≥0 and ε1 ≥0, and assume that there exists Φ > 0 such that for all e, e′ ∈E (H) satisfying F(e′) > F(e), we have: F(e′) −F(e) ≤Φ ⟨a(F(e′)) , e −e′⟩. (1) Then, let us take e⋆∈argmaxe′∈E(H) F(e′), and denote a⋆= a(F(e⋆)). Let furthermore g ∈Rd + and h ∈H satisfying the two following conditions: (i) ∥g −a⋆∥2≤ε0 (ii) ⟨g, E(h)⟩≤ min e′∈E(H) ⟨g, e′⟩+ ε1 . We have: F(E(h)) ≥F(e⋆) −Φ · (2ε0M + ε1) , where M = max e′∈E(H) ∥e′ ∥2. Proof Let e′ ∈E (H). By writing ⟨g, e′⟩= ⟨g −a⋆, e′⟩+ ⟨a⋆, e′⟩and applying Cauchy-Schwarz inequality to ⟨g −a⋆, e′⟩we get ⟨g, e′⟩≤⟨a⋆, e′⟩+ ε0M using condition (i). Consequently min e′∈E(H) ⟨g, e′⟩≤ min e′∈E(H) ⟨a⋆, e′⟩+ ε0M = ⟨a⋆, e⋆⟩+ ε0M (2) Where the equality is given by Proposition 4. Now, let e = E(h), assuming that classifier h satisfies condition (ii). Using ⟨a⋆, e⟩= ⟨a⋆−g, e⟩+ ⟨g, e⟩and Cauchy-Shwarz, we obtain: ⟨a⋆, e⟩≤⟨g, e⟩+ ε0M ≤ min e′∈E(H) ⟨g, e′⟩+ ε1 + ε0M ≤⟨a⋆, e⋆⟩+ ε1 + 2ε0M , where the first inequality comes from condition (ii) and the second inequality comes from (2). The final result is obtained by plugging this inequality into (1). □ 5 Before discussing this result, we first give explicit values of a and Φ for pseudo-linear F-measures: Proposition 6 Fβ, mFβ and mcFβ defined in Section 2 satisfy the conditions of Proposition 5 with: (binary) Fβ: Φ = 1 β2P1 and a : t ∈[0, 1] 7→(1+β2 −t, t, 0, 0) . (multilabel–micro) mFβ: Φ = 1 β2 PL k=1 Pk and ai(t) = 1 + β2 −t if i is odd t if i is even . (multiclass–micro) mcFβ: Φ = 1 β2(1 −P1) and ai(t) = 1 + β2 −t if i is odd and i ̸= 1 t if i = 1 0 otherwise . The proof is given in the longer version of the paper, and the values of Φ and a are valid for any set of classifiers H. 
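Propositions 4 and 6 can be checked numerically for the binary Fβ-measure: on a finite set of achievable error profiles E(H) (here an illustrative trade-off curve of our own making, where fewer false negatives cost more false positives), the minimizer of the cost ⟨a(F⋆), ·⟩ attains the optimal F-score.

```python
import numpy as np

beta, P1 = 1.0, 0.3

def f_beta(e):
    b2 = beta ** 2
    return (1 + b2) * (P1 - e[0]) / ((1 + b2) * P1 - e[0] + e[1])

def cost_vector(t):
    # a(t) = (1 + beta^2 - t, t, 0, 0) for the binary F_beta (Proposition 6)
    return np.array([1 + beta ** 2 - t, t, 0.0, 0.0])

# An illustrative finite E(H): a trade-off curve e2 = 0.14 - 0.4 * e1.
E_H = [np.array([e1, 0.14 - 0.4 * e1, 0.0, 0.0])
       for e1 in np.linspace(0.0, P1, 61)]

F_star = max(f_beta(e) for e in E_H)                      # optimal F-score over E(H)
e_min = min(E_H, key=lambda e: cost_vector(F_star) @ e)   # cost-sensitive minimizer
```

As Proposition 4 states, the cost-sensitive minimizer e_min lies in the F⋆-level set.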
Note that the result on Fβ for binary classification can be used for the macro-Fβ-measure in multilabel classification when training one binary classifier per label. Also, the relative costs (1 + β2 − t) for false negatives and t for false positives imply that for the F1-measure, the optimal classifier is the solution of the cost-sensitive binary problem with costs (1 − F⋆/2), F⋆/2. If we take H as the set of all measurable functions, the Bayes-optimal classifier for this cost is to predict class 1 when µ(y = 1|x) ≥ F⋆/2 (see e.g. [22]). Our proposition thus extends this known result [13] to the non-asymptotic regime and to an arbitrary set of classifiers.
3.2 Practical Algorithm
Our results suggest that the optimization of pseudo-linear F-measures should wrap cost-sensitive classification algorithms (used in an inner loop) with an outer loop that sets the appropriate costs. In practice, since the function a : [0, 1] → Rd, which assigns costs to probabilities of error, is Lipschitz-continuous (with constant 2 on our examples), it is sufficient to discretize the interval [0, 1] into a set of evenly spaced values {t1, ..., tC} (say, tj+1 − tj = ε0/2) to obtain an ε0-cover {a(t1), ..., a(tC)} of the possible costs. Using the approximate guarantee of Proposition 5, learning a cost-sensitive classifier for each a(ti) and selecting the one with the best F-measure a posteriori is sufficient to obtain an MΦ(2ε0 + ε1)-optimal solution, where ε1 is the approximation guarantee of the cost-sensitive classification algorithm. This meta-algorithm can be instantiated with any learning algorithm and different F-measures. In our experiments of Section 4, we first use it with cost-sensitive binary classification algorithms: Support Vector Machines (SVMs) and logistic regression, both with asymmetric costs [2], to optimize the F1-measure in binary classification and the macro-F1-score in multilabel classification (training one-vs-rest classifiers). Musicant et al.
[14] also advocated for SVMs with asymmetric costs for F1-measure optimization in binary classification. However, their argument, specific to SVMs, is not methodological but technical (a relaxation of the maximization problem).
4 Experiments
The goal of this section is to illustrate the algorithms suggested by the theory. First, our results suggest that cost-sensitive classification algorithms may be preferable to the more usual probability-thresholding method. We compare cost-sensitive classification, as implemented by SVMs with asymmetric costs, to thresholded logistic regression, with linear classifiers. Besides, the structured SVM approach to F1-measure maximization, SVMperf [11], provides another baseline. For completeness, we also report results for thresholded SVMs, cost-sensitive logistic regression, and for the thresholded versions of SVMperf and the cost-sensitive algorithms (a thresholded algorithm means that the decision threshold is tuned a posteriori by maximizing the F1-score on the validation set). Cost-sensitive SVMs and logistic regression (LR) differ in the loss they optimize (weighted hinge loss for SVMs, weighted log-loss for LR), and even though both losses are calibrated in the cost-sensitive setting (that is, they converge toward a Bayes-optimal classifier as the number of examples and the capacity of the class of functions grow to infinity) [22], they behave differently on finite datasets or with restricted classes of functions.
Figure 1: Decision boundaries for the galaxy dataset before and after thresholding the classifier scores of SVMperf (dotted, blue), cost-sensitive SVM (dot-dashed, cyan), logistic regression (solid, red), and cost-sensitive logistic regression (dashed, green). The horizontal black dotted line is an optimal decision boundary.
We may also note that asymptotically, the Bayes-classifier for
a cost-sensitive binary classification problem is a classifier which thresholds the posterior probability of being class 1. Thus, all methods but SVMperf are asymptotically equivalent, and our goal here is to analyze their non-asymptotic behavior on a restricted class of functions. Although our theoretical developments do not indicate any need to threshold the scores of classifiers, the practical benefits of a post-hoc adjustment of these scores can be important in terms of F1-measure maximization. The reason is that the decision threshold given by cost-sensitive SVMs or logistic regression might not be optimal in terms of the cost-sensitive 0/1-error, as already noted in cost-sensitive learning scenarios [10, 2]. This is illustrated in Figure 1, on the didactic “Galaxy” distribution, consisting of four clusters of 2D-examples, indexed by z ∈ {1, 2, 3, 4}, with prior probabilities P(z = 1) = 0.01, P(z = 2) = 0.1, P(z = 3) = 0.001, and P(z = 4) = 0.889, and respective class conditional probabilities P(y = 1|z = 1) = 0.9, P(y = 1|z = 2) = 0.09, P(y = 1|z = 3) = 0.9, and P(y = 1|z = 4) = 0. We drew a very large sample (100,000 examples) from the distribution, whose optimal F1-measure is 67.5%. Without tuning the decision threshold of the classifiers, the best F1-measure among the classifiers is 55.3%, obtained by SVMperf, whereas tuning thresholds makes it possible to reach the optimal F1-measure for SVMperf and cost-sensitive SVM. On the other hand, LR is severely affected by the non-linearity of the level sets of the posterior probability distribution, and does not reach this limit (best F1-score of 48.9%). Note also that even with this very large sample size, the SVM and LR classifiers are very different.
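The cost-grid meta-algorithm of Section 3.2 can be sketched as follows. For illustration we use a synthetic model of our own (not the paper's data) whose true posterior is known, so the inner cost-sensitive "learner" reduces to the plug-in rule predicting class 1 when η(x) ≥ t/2, the Bayes rule for costs (2 − t, t); in practice this inner step would be a cost-sensitive SVM or LR.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data: P(y = 1) = 0.3 and x | y ~ N(y, 1) (an illustrative toy model).
n = 20000
y = (rng.random(n) < 0.3).astype(int)
x = rng.normal(loc=y.astype(float), scale=1.0)

def posterior(x):
    # True eta(x) = P(y = 1 | x) for this synthetic model
    p1 = 0.3 * np.exp(-0.5 * (x - 1.0) ** 2)
    p0 = 0.7 * np.exp(-0.5 * x ** 2)
    return p1 / (p0 + p1)

def f1(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return 2 * tp / (2 * tp + fp + fn)

# Outer loop over the cost grid; the inner cost-sensitive step is the plug-in
# rule predicting 1 when eta(x) >= t/2, i.e. the Bayes rule for costs (2 - t, t).
eta = posterior(x)
best_f1, best_t = max((f1(y, (eta >= t / 2).astype(int)), t)
                      for t in np.linspace(0.05, 1.0, 20))
```

Selecting the best grid point a posteriori mirrors the MΦ(2ε0 + ε1)-optimality guarantee: refining the grid shrinks ε0.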
The datasets we use are Adult (binary classification, 32,561/16,281 train/test ex., 123 features), Letter (single label multiclass, 26 classes, 20,000 ex., 16 features), and two text datasets: the 20 Newsgroups dataset News201 (single label multiclass, 20 classes, 15,935/3,993 train/test ex., 62,061 features, scaled version) and Siam2 (multilabel, 22 classes, 21,519/7,077 train/test ex., 30,438 features). All datasets except for News20 and Siam are obtained from the UCI repository3. For each experiment, the training set was split at random, keeping 1/3 for the validation set used to select all hyper-parameters, based on the maximization of the F1-measure on this set. For datasets that do not come with a separate test set, the data was first split to keep 1/4 for test. The algorithms have from one to three hyper-parameters: (i) all algorithms are run with L2 regularization, with a regularization parameter C ∈{2−6, 2−5, ..., 26}; (ii) for the cost-sensitive algorithms, the cost for false negatives is chosen in { 2−t t , t ∈{0.1, 0.2, ..., 1.9}} of Proposition 6 4; (iii) for the thresholded algorithms, the threshold is chosen among all the scores of the validation examples. 1http://www.csie.ntu.edu.tw/˜cjlin/libsvmtools/datasets/multiclass. html#news20 2http://www.csie.ntu.edu.tw/˜cjlin/libsvmtools/datasets/multilabel. html#siam-competition2007 3https://archive.ics.uci.edu/ml/datasets.html 4We take t greater than 1 in case the training asymmetry would be different from the true asymmetry [2]. 7 Table 1: (macro-)F1-measures (in %). Options: T stands for thresholded, CS for cost-sensitive and CS&T for cost-sensitive and thresholded. 
Dataset | SVMperf (Baseline) | SVMperf (T) | SVM (T) | SVM (CS) | SVM (CS&T) | LR (T) | LR (CS) | LR (CS&T)
Adult   | 67.3 | 67.9 | 67.8 | 67.9 | 67.8 | 67.8 | 67.9 | 67.8
Letter  | 52.5 | 60.8 | 63.1 | 63.2 | 63.8 | 61.2 | 59.9 | 62.1
News20  | 59.5 | 78.7 | 82.0 | 81.7 | 82.4 | 81.2 | 81.1 | 81.5
Siam    | 49.4 | 52.8 | 52.6 | 51.9 | 54.9 | 53.9 | 53.8 | 54.4
The library LibLinear [9] was used to implement SVMs5 and Logistic Regression (LR). A constant feature with value 100 was added to each dataset to mimic an unregularized offset. The results, averaged over five random splits, are reported in Table 1. As expected, the differences between methods are less extreme than on the artificial “Galaxy” dataset. The Adult dataset is an example where all methods perform nearly identically; there, the surrogate loss used in practice seems unimportant. On the other datasets, we observe that thresholding has a rather large impact, especially for SVMperf; this is also true for the other classifiers: the unthresholded SVM and LR with symmetric costs (unreported here) were not competitive either. The cost-sensitive (thresholded) SVM outperforms all other methods, as suggested by the theory. It is probably the method of choice when predictive performance is a must. On these datasets, thresholded LR behaves reasonably well considering its relatively low computational cost. Indeed, LR is much faster than SVM: in their thresholded cost-sensitive versions, the timings for LR on the News20 and Siam datasets are 6,400 and 8,100 seconds, versus 255,000 and 147,000 seconds for SVM, respectively. Note that we did not try to optimize the running time in our experiments; in particular, considerable time savings could be achieved by using warm-start.
5 Conclusion
We presented an analysis of F-measures, leveraging the pseudo-linearity of some of them to obtain a strong non-asymptotic reduction to cost-sensitive classification. The results hold for any dataset and for any class of functions.
Our experiments on linear functions confirm theory, by demonstrating the practical interest of using cost-sensitive classification algorithms rather than using a simple probability thresholding. However, they also reveal that, for F-measure maximization, thresholding the solutions provided by cost-sensitive algorithms further improves performances. Algorithmically and empirically, we only explored the simplest case of our result (Fβ-measure in binary classification and macro-Fβ-measure in multilabel classification), but much more remains to be done. First, the strategy we use for searching the optimal costs is a simple uniform discretization procedure, and more efficient exploration techniques could probably be developped. Second, algorithms for the optimization of the micro-Fβ-measure in multilabel classification received interest recently as well [8, 19], but are for now limited to the selection of threshold after any kind of training. New methods for that measure may be designed from our reduction; we also believe that our result can lead to progresses towards optimizing the micro-Fβ measure in multiclass classification. Acknowledgments This work was carried out and funded in the framework of the Labex MS2T. It was supported by the Picardy Region and the French Government, through the program “Investments for the future” managed by the National Agency for Research (Reference ANR-11-IDEX-0004-02). References [1] N. Abe, B. Zadrozny, and J. Langford. An iterative method for multi-class cost-sensitive learning. In W. Kim, R. Kohavi, J. Gehrke, and W. DuMouchel, editors, KDD, pages 3–11. ACM, 2004. [2] F. R. Bach, D. Heckerman, and E. Horvitz. Considering cost asymmetry in learning classifiers. J. Mach. Learn. Res., 7:1713–1741, December 2006. 5The maximum number of iteration for SVMs was set to 50,000 instead of the default 1,000. 8 [3] A. Cambini and L. Martein. 
Generalized Convexity and Optimization, volume 616 of Lecture Notes in Economics and Mathematical Systems. Springer, 2009. [4] W. Cheng, K. Dembczynski, E. H¨ullermeier, A. Jaroszewicz, and W. Waegeman. F-measure maximization in topical classification. In J. Yao, Y. Yang, R. Slowinski, S. Greco, H. Li, S. Mitra, and L. Polkowski, editors, RSCTC, volume 7413 of Lecture Notes in Computer Science, pages 439–446. Springer, 2012. [5] K. Dembczynski, A. Jachnik, W. Kotlowski, W. Waegeman, and E. H¨ullermeier. Optimizing the Fmeasure in multi-label classification: Plug-in rule approach versus structured loss minimization. In S. Dasgupta and D. Mcallester, editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 1130–1138. JMLR Workshop and Conference Proceedings, May 2013. [6] K. Dembczynski, W. Waegeman, W. Cheng, and E. H¨ullermeier. An exact algorithm for F-measure maximization. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. C. N. Pereira, and K. Q. Weinberger, editors, NIPS, pages 1404–1412, 2011. [7] C. Elkan. The foundations of cost-sensitive learning. In International Joint Conference on Artificial Intelligence, volume 17, pages 973–978, 2001. [8] R. E. Fan and C. J. Lin. A study on threshold selection for multi-label classification. Technical report, National Taiwan University, 2007. [9] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. Liblinear: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008. [10] Y. Grandvalet, J. Mari´ethoz, and S. Bengio. A probabilistic interpretation of SVMs with an application to unbalanced classification. In NIPS, 2005. [11] T. Joachims. A support vector method for multivariate performance measures. In Proceedings of the 22nd International Conference on Machine Learning, pages 377–384. ACM Press, 2005. [12] J.-D. Kim, Y. Wang, and Y. Yasunori. The genia event extraction shared task, 2013 edition - overview. 
In Proceedings of the BioNLP Shared Task 2013 Workshop, pages 8–15, Sofia, Bulgaria, August 2013. Association for Computational Linguistics. [13] Z. C. Lipton, C. Elkan, and B. Naryanaswamy. Optimal thresholding of classifiers to maximize F1 measure. In T. Calders, F. Esposito, E. H¨ullermeier, and R. Meo, editors, Machine Learning and Knowledge Discovery in Databases, volume 8725 of Lecture Notes in Computer Science, pages 225–239. Springer, 2014. [14] D. R. Musicant, V. Kumar, and A. Ozgur. Optimizing F-measure with support vector machines. In Proceedings of the FLAIRS Conference, pages 356–360, 2003. [15] Y. Nan, K. M. A. Chai, W. S. Lee, and H. L. Chieu. Optimizing F-measures: A tale of two approaches. In ICML. icml.cc / Omnipress, 2012. [16] J. Petterson and T. S. Caetano. Reverse multi-label learning. In NIPS, volume 1, pages 1912–1920, 2010. [17] J. Petterson and T. S. Caetano. Submodular multi-label learning. In NIPS, pages 1512–1520, 2011. [18] I. Pillai, G. Fumera, and F. Roli. F-measure optimisation in multi-label classifiers. In ICPR, pages 2424– 2427. IEEE, 2012. [19] I. Pillai, G. Fumera, and F. Roli. Threshold optimisation for multi-label classifiers. Pattern Recogn., 46(7):2055–2065, July 2013. [20] C. J. V. Rijsbergen. Information Retrieval. Butterworth-Heinemann, Newton, MA, USA, 2nd edition, 1979. [21] C. Scott. Calibrated asymmetric surrogate losses. Electronic Journal of Statistics, 6:958–992, 2012. [22] I. Steinwart. How to compare different loss functions and their risks. Constructive Approximation, 26(2):225–287, 2007. [23] G. Tsoumakas and I. Katakis. Multi-label classification: An overview. International Journal of Data Warehousing and Mining (IJDWM), 3(3):1–13, 2007. [24] Z.-H. Zhou and X.-Y. Liu. On multi-class cost-sensitive learning. Computational Intelligence, 26(3):232– 257, 2010. 9
Distributed Power-law Graph Computing: Theoretical and Empirical Analysis Cong Xie Dept. of Comp. Sci. and Eng. Shanghai Jiao Tong University 800 Dongchuan Road Shanghai 200240, China xcgoner1108@gmail.com Ling Yan Dept. of Comp. Sci. and Eng. Shanghai Jiao Tong University 800 Dongchuan Road Shanghai 200240, China yling0718@sjtu.edu.cn Wu-Jun Li National Key Lab. for Novel Software Tech. Dept. of Comp. Sci. and Tech. Nanjing University Nanjing 210023, China liwujun@nju.edu.cn Zhihua Zhang Dept. of Comp. Sci. and Eng. Shanghai Jiao Tong University 800 Dongchuan Road Shanghai 200240, China zhang-zh@cs.sjtu.edu.cn Abstract With the emergence of big graphs in a variety of real applications like social networks, machine learning based on distributed graph-computing (DGC) frameworks has attracted much attention from big data machine learning community. In DGC frameworks, the graph partitioning (GP) strategy plays a key role to affect the performance, including the workload balance and communication cost. Typically, the degree distributions of natural graphs from real applications follow skewed power laws, which makes GP a challenging task. Recently, many methods have been proposed to solve the GP problem. However, the existing GP methods cannot achieve satisfactory performance for applications with power-law graphs. In this paper, we propose a novel vertex-cut method, called degree-based hashing (DBH), for GP. DBH makes effective use of the skewed degree distributions for GP. We theoretically prove that DBH can achieve lower communication cost than existing methods and can simultaneously guarantee good workload balance. Furthermore, empirical results on several large power-law graphs also show that DBH can outperform the state of the art. 1 Introduction Recent years have witnessed the emergence of big graphs in a large variety of real applications, such as the web and social network services. 
Furthermore, many machine learning and data mining algorithms can also be modeled with graphs [13]. Hence, machine learning based on distributed graph-computing (DGC) frameworks has attracted much attention from big data machine learning community [13, 15, 14, 6, 11, 7]. To perform distributed (parallel) graph-computing on clusters with several machines (servers), one has to partition the whole graph across the machines in a cluster. Graph partitioning (GP) can dramatically affect the performance of DGC frameworks in terms of workload balance and communication cost. Hence, the GP strategy typically plays a key role in DGC frameworks. The ideal GP method should minimize the cross-machine communication cost, and simultaneously keep the workload in every machine approximately balanced. 1 Existing GP methods can be divided into two main categories: edge-cut and vertex-cut methods. Edge-cut tries to evenly assign the vertices to machines by cutting the edges. In contrast, vertex-cut tries to evenly assign the edges to machines by cutting the vertices. Figure 1 illustrates the edgecut and vertex-cut partitioning results of an example graph. In Figure 1 (a), the edges (A,C) and (A,E) are cut, and the two machines store the vertex sets {A,B,D} and {C,E}, respectively. In Figure 1 (b), the vertex A is cut, and the two machines store the edge sets {(A,B), (A,D), (B,D)} and {(A,C), (A,E), (C,E)}, respectively. In edge-cut, both machines of a cut edge should maintain a ghost (local replica) of the vertex and the edge data. In vertex-cut, all the machines associated with a cut vertex should maintain a mirror (local replica) of the vertex. The ghosts and mirrors are shown in shaded vertices in Figure 1. In edge-cut, the workload of a machine is determined by the number of vertices located in that machine, and the communication cost of the whole graph is determined by the number of edges spanning different machines. 
In vertex-cut, the workload of a machine is determined by the number of edges located in that machine, and the communication cost of the whole graph is determined by the number of mirrors of the vertices.
Figure 1: Two strategies for graph partitioning: (a) edge-cut; (b) vertex-cut. Shaded vertices are ghosts and mirrors, respectively.
Most traditional DGC frameworks, such as GraphLab [13] and Pregel [15], use edge-cut methods [9, 18, 19, 20] for GP. Very recently, the authors of PowerGraph [6] found that vertex-cut methods can achieve better performance than edge-cut methods, especially for power-law graphs. Hence, vertex-cut has attracted more and more attention from the DGC research community. For example, PowerGraph [6] adopts a random vertex-cut method and two greedy variants for GP. GraphBuilder [8] provides some heuristics, such as the grid-based constrained solution, to improve the random vertex-cut method. Large natural graphs usually follow skewed degree distributions like power-law distributions, which makes GP challenging. Different vertex-cut methods can result in different performance for power-law graphs. For example, Figure 2 (a) shows a toy power-law graph with only one vertex having a much higher degree than the others. Figure 2 (b) shows a partitioning strategy that cuts the vertices {E, F, A, C, D}, and Figure 2 (c) shows a partitioning strategy that cuts the vertices {A, E}. We can see that the partitioning strategy in Figure 2 (c) is better than that in Figure 2 (b) because the number of mirrors in Figure 2 (c) is smaller, which means less communication cost. The intuition underlying this example is that cutting higher-degree vertices results in fewer mirror vertices. Hence, the power-law degree distribution can be used to facilitate GP. Unfortunately, existing vertex-cut methods, including those in PowerGraph and GraphBuilder, rarely make use of the power-law degree distribution for GP.
Hence, they cannot achieve satisfactory performance on natural power-law graphs. PowerLyra [4] tries to combine edge-cut and vertex-cut by using the power-law degree distribution. However, it lacks a theoretical guarantee.
Figure 2: Partitioning a sample graph with vertex-cut: (a) sample graph; (b) bad partitioning; (c) good partitioning.
In this paper, we propose a novel vertex-cut GP method, called degree-based hashing (DBH), for distributed power-law graph computing. The main contributions of DBH are briefly outlined as follows:
• DBH can effectively exploit the power-law degree distributions in natural graphs for vertex-cut GP.
• Theoretical bounds on the communication cost and workload balance for DBH can be derived, which show that DBH can achieve lower communication cost than existing methods and can simultaneously guarantee good workload balance.
• DBH can be implemented as an execution engine for PowerGraph [6], and hence all PowerGraph applications can be seamlessly supported by DBH.
• Empirical results on several large real graphs and synthetic graphs show that DBH can outperform the state-of-the-art methods.
2 Problem Formulation
Let G = (V, E) denote a graph, where V = {v1, v2, . . . , vn} is the set of vertices and E ⊆ V × V is the set of edges in G. Let |V | denote the cardinality of the set V, and hence |V | = n. vi and vj are called neighbors if (vi, vj) ∈ E. The degree of vi is denoted di, i.e., the number of neighbors of vi. Please note that we only need to consider the GP task for undirected graphs, because the workload mainly depends on the number of edges regardless of whether the computation is based on directed or undirected graphs. Even if the computation is based on directed graphs, we can use their undirected counterparts to get the partitioning results. Assume we have a cluster of p machines. Vertex-cut GP assigns each edge, together with its two corresponding vertices, to one of the p machines in the cluster.
The assignment of an edge is unique, while vertices may have replicas across different machines. For DGC frameworks based on vertex-cut GP, the workload (amount of computation) of a machine is roughly linear in the number of edges located in that machine, and the replicas of the vertices incur communication for synchronization. So the goal of vertex-cut GP is to minimize the number of replicas and simultaneously balance the number of edges on each machine. Let M(e) ∈{1, . . . , p} be the machine edge e ∈E is assigned to, and A(v) ⊆{1, . . . , p} be the span of vertex v over different machines. Hence, |A(v)| is the number of replicas of v among different machines. Similar to PowerGraph [6], one of the replicas of a vertex is chosen as the master and the others are treated as the mirrors of the master. We let Master(v) denote the machine in which the master of v is located. Hence, the goal of vertex-cut GP can be formulated as follows: min A 1 n n X i=1 |A(vi)| s.t. max m |{e ∈E | M(e) = m}| < λ|E| p , and max m |{v ∈V | Master(v) = m}| < ρn p , where m ∈{1, . . . , p} denotes a machine, λ ≥1 and ρ ≥1 are imbalance factors. We define 1 n nP i=1 |A(vi)| as replication factor, p |E| max m |{e ∈E | M(e) = m}| as edge-imbalance, and p n max m |{v ∈V | Master(v) = m}| as vertex-imbalance. To get a good balance of workload, λ and ρ should be as small as possible. The degrees of natural graphs usually follow skewed power-law distributions [3, 1]: Pr(d) ∝d−α, where Pr(d) is the probability that a vertex has degree d and the power parameter α is a positive constant. The lower the α is, the more skewed a graph will be. This power-law degree distribution makes GP challenging [6]. 
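The three quality measures defined above can be computed from an edge assignment as follows; this is a sketch in which the rule for choosing each vertex's master among its replicas (smallest machine id) is our own simplification.

```python
from collections import defaultdict

def partition_metrics(edges, assignment, p):
    # edges: list of (u, v) pairs; assignment: machine id in {0, ..., p-1} per edge
    span = defaultdict(set)                  # A(v): machines spanned by vertex v
    edges_per_machine = [0] * p
    for (u, v), m in zip(edges, assignment):
        span[u].add(m)
        span[v].add(m)
        edges_per_machine[m] += 1
    n = len(span)
    # Choose each vertex's master among its replicas (here: smallest machine id).
    masters_per_machine = [0] * p
    for s in span.values():
        masters_per_machine[min(s)] += 1
    replication_factor = sum(len(s) for s in span.values()) / n
    edge_imbalance = p * max(edges_per_machine) / len(edges)
    vertex_imbalance = p * max(masters_per_machine) / n
    return replication_factor, edge_imbalance, vertex_imbalance
```

On the vertex-cut of Figure 1 (b), where only vertex A is cut across two machines, the replication factor is 6/5 = 1.2 and the edges are perfectly balanced.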
Although vertex-cut methods can achieve better performance than edge-cut methods for power-law graphs [6], existing vertex-cut methods, such as random method in PowerGraph and grid-based method in GraphBuilder [8], cannot make effective use of the powerlaw distribution to achieve satisfactory performance. 3 3 Degree-Based Hashing for GP In this section, we propose a novel vertex-cut method, called degree-based hashing (DBH), to effectively exploit the power-law distribution for GP. 3.1 Hashing Model We refer to a certain machine by its index idx, and the idxth machine is denoted as Pidx. We first define two kinds of hash functions: vertex-hash function idx = vertex hash(v) which hashes vertex v to the machine Pidx, and edge-hash function idx = edge hash(e) or idx = edge hash(vi, vj) which hashes edge e = (vi, vj) to the machine Pidx. Our hashing model includes two main components: • Master-vertex assignment: The master replica of vi is uniquely assigned to one of the p machines with equal probability for each machine by some randomized hash function vertex hash(vi). • Edge assignment: Each edge e = (vi, vj) is assigned to one of the p machines by some hash function edge hash(vi, vj). It is easy to find that the above hashing model is a vertex-cut GP method. The master-vertex assignment can be easily implemented, which can also be expected to achieve a low vertex-imbalance score. On the contrary, the edge assignment is much more complicated. Different edge-hash functions can achieve different replication factors and different edge-imbalance scores. Please note that replication factor reflects communication cost, and edge-imbalance reflects workload-imbalance. Hence, the key of our hashing model lies in the edge-hash function edge hash(vi, vj). 
3.2 Degree-Based Hashing
From the example in Figure 2, we observe that in power-law graphs the replication factor, which is defined as the total number of replicas divided by the total number of vertices, will be smaller if we cut vertices with relatively higher degrees. Based on this intuition, we define edge hash(vi, vj) as follows: edge hash(vi, vj) = vertex hash(vi) if di < dj, and vertex hash(vj) otherwise. (1) That is, we use the vertex-hash function to define the edge-hash function: the edge-hash value of an edge is the vertex-hash value of the associated vertex with the smaller degree. Hence, our method is called degree-based hashing (DBH). DBH captures the intuition that cutting vertices with higher degrees yields better performance. Our DBH method for vertex-cut GP is briefly summarized in Algorithm 1, where [n] = {1, . . . , n}.
Algorithm 1 Degree-based hashing (DBH) for vertex-cut GP
Input: The set of edges E; the set of vertices V; the number of machines p.
Output: The assignment M(e) ∈ [p] for each edge e.
1: Initialization: count the degree di for each i ∈ [n] in parallel
2: for all e = (vi, vj) ∈ E do
3:   Hash each edge in parallel:
4:   if di < dj then
5:     M(e) ← vertex hash(vi)
6:   else
7:     M(e) ← vertex hash(vj)
8:   end if
9: end for
4 Theoretical Analysis
In this section, we present a theoretical analysis of our DBH method. For comparison, the random vertex-cut method (called Random) of PowerGraph [6] and the grid-based constrained solution (called Grid) of GraphBuilder [8] are adopted as baselines. Our analysis is based on randomization. Moreover, we assume that the graph is undirected and there are no duplicated edges in the graph. We mainly study the performance in terms of the replication factor, edge-imbalance and vertex-imbalance defined in Section 2.
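Algorithm 1 can be sketched in a few lines of code; the modulo-based default vertex-hash is an illustrative stand-in for a randomized hash, not part of the paper.

```python
from collections import Counter

def dbh_partition(edges, p, vertex_hash=None):
    # Degree-based hashing (Algorithm 1): hash each edge by its lower-degree endpoint.
    if vertex_hash is None:
        vertex_hash = lambda v: hash(v) % p   # stand-in for a randomized vertex-hash
    degree = Counter()
    for u, v in edges:                        # line 1: count degrees
        degree[u] += 1
        degree[v] += 1
    assignment = []
    for u, v in edges:                        # lines 2-9: degree-based edge hashing
        low = u if degree[u] < degree[v] else v
        assignment.append(vertex_hash(low))
    return assignment
```

On a star graph, every edge is hashed by its degree-1 leaf, so the high-degree hub is the only vertex replicated across machines, matching the intuition of Figure 2 (c).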
Due to space limitations, we put the proofs of all theoretical results in the supplementary material.

4.1 Partitioning Degree-fixed Graphs

Firstly, we assume that the degree sequence $\{d_i\}_{i=1}^n$ is fixed. Then we can derive the expected replication factor produced by the different methods. Random assigns each edge evenly to the p machines via a randomized hash function; the result follows directly from PowerGraph [6].

Lemma 1. Assume that we have a sequence of n vertices $\{v_i\}_{i=1}^n$ and the corresponding degree sequence $D = \{d_i\}_{i=1}^n$. A simple randomized vertex-cut on p machines has the expected replication factor
$$\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}|A(v_i)| \,\Big|\, D\right] = \frac{p}{n}\sum_{i=1}^{n}\left[1-\left(1-\frac{1}{p}\right)^{d_i}\right].$$

With the Grid hash function, each vertex has $\sqrt{p}$ rather than p candidate machines compared to Random. Thus we simply replace p with $\sqrt{p}$ to get the following corollary.

Corollary 1. By using Grid for hashing, the expected replication factor on p machines is
$$\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}|A(v_i)| \,\Big|\, D\right] = \frac{\sqrt{p}}{n}\sum_{i=1}^{n}\left[1-\left(1-\frac{1}{\sqrt{p}}\right)^{d_i}\right].$$

Using the DBH method of Section 3.2, we obtain the following result by fixing the sequence $\{h_i\}_{i=1}^n$, where $h_i$ is defined as the number of $v_i$'s adjacent edges that are hashed by the neighbors of $v_i$ according to the edge-hash function defined in (1).

Theorem 1. Assume that we have a sequence of n vertices $\{v_i\}_{i=1}^n$ and the corresponding degree sequence $D = \{d_i\}_{i=1}^n$. For each $v_i$, $d_i - h_i$ of its adjacent edges are hashed by $v_i$ itself. Define $H = \{h_i\}_{i=1}^n$. Our DBH method on p machines has the expected replication factor
$$\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}|A(v_i)| \,\Big|\, H, D\right] = \frac{p}{n}\sum_{i=1}^{n}\left[1-\left(1-\frac{1}{p}\right)^{h_i+1}\right] \le \frac{p}{n}\sum_{i=1}^{n}\left[1-\left(1-\frac{1}{p}\right)^{d_i}\right],$$
where $h_i \le d_i - 1$ for every $v_i$.

This theorem says that our DBH method has a smaller expected replication factor than the Random method of PowerGraph [6]. Next we turn to the analysis of the balance constraints. We still fix the degree sequence and obtain the following result for our DBH method.

Theorem 2.
Our DBH method on p machines, with the sequences $\{v_i\}_{i=1}^n$, $\{d_i\}_{i=1}^n$, and $\{h_i\}_{i=1}^n$ defined in Theorem 1, has the edge-imbalance
$$\frac{\max_{m} |\{e\in E \mid M(e)=m\}|}{|E|/p} \;=\; \frac{\frac{1}{p}\sum_{i=1}^{n} h_i \;+\; \max_{j\in[p]} \sum_{v_i\in P_j}(d_i-h_i)}{2|E|/p}.$$

Although the master vertices are evenly assigned to the machines, we want to show how close the randomized assignment is to perfect balance. This problem is well studied in the model of uniformly throwing n balls into p bins when $n \gg p(\ln p)^3$ [17].

Lemma 2. The maximum number of master vertices on any machine is bounded as follows:
$$\Pr[\mathrm{MaxLoad} > k_a] = o(1) \text{ if } a > 1, \qquad \Pr[\mathrm{MaxLoad} > k_a] = 1 - o(1) \text{ if } 0 < a < 1.$$
Here $\mathrm{MaxLoad} = \max_m |\{v \in V \mid \mathrm{Master}(v) = m\}|$, and
$$k_a = \frac{n}{p} + \sqrt{\frac{2n\ln p}{p}\left(1-\frac{\ln\ln p}{2a\ln p}\right)}.$$

4.2 Partitioning Power-law Graphs

Now we replace the fixed degree sequence with a sequence of random samples generated from the power-law distribution. As a result, upper bounds can be provided for the three methods Random, Grid, and DBH.

Theorem 3. Let the minimal degree be $d_{\min}$ and let each $d \in \{d_i\}_{i=1}^n$ be sampled from a power-law degree distribution with parameter $\alpha \in (2, 3)$. The expected replication factor of Random on p machines can be approximately bounded by
$$\mathbb{E}_D\left[\frac{p}{n}\sum_{i=1}^{n}\left(1-\left(1-\frac{1}{p}\right)^{d_i}\right)\right] \le p\left[1-\left(1-\frac{1}{p}\right)^{\hat\Omega}\right], \qquad \text{where } \hat\Omega = d_{\min}\cdot\frac{\alpha-1}{\alpha-2}.$$

This theorem says that when the degree sequence follows a power-law distribution, the upper bound on the expected replication factor increases as $\alpha$ decreases. This implies that Random yields a worse partitioning when the power-law graph is more skewed. As in Corollary 1, we replace p with $\sqrt{p}$ to get the analogous result for Grid.

Corollary 2. By using the Grid method, the expected replication factor on p machines can be approximately bounded by
$$\mathbb{E}_D\left[\frac{\sqrt{p}}{n}\sum_{i=1}^{n}\left(1-\left(1-\frac{1}{\sqrt{p}}\right)^{d_i}\right)\right] \le \sqrt{p}\left[1-\left(1-\frac{1}{\sqrt{p}}\right)^{\hat\Omega}\right], \qquad \text{where } \hat\Omega = d_{\min}\cdot\frac{\alpha-1}{\alpha-2}.$$
Note that
$$\sqrt{p}\left[1-\left(1-\tfrac{1}{\sqrt{p}}\right)^{\hat\Omega}\right] \le p\left[1-\left(1-\tfrac{1}{p}\right)^{\hat\Omega}\right].$$
So Corollary 2 tells us that Grid can reduce the replication factor, but the reduction is not driven by the skewness of the degree distribution.
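The closed-form expectations of Lemma 1, Corollary 1, and Theorem 1 are easy to evaluate numerically. The following sketch compares them; the degree sequence, the choice of $h_i$, and the machine count are illustrative assumptions:

```python
def expected_rf_random(degrees, p):
    # Lemma 1: (p/n) * sum_i [1 - (1 - 1/p)^{d_i}].
    # Passing p = sqrt(p) gives the Grid expression of Corollary 1.
    n = len(degrees)
    return (p / n) * sum(1 - (1 - 1 / p) ** d for d in degrees)

def expected_rf_dbh(h, p):
    # Theorem 1: (p/n) * sum_i [1 - (1 - 1/p)^{h_i + 1}]
    n = len(h)
    return (p / n) * sum(1 - (1 - 1 / p) ** (hi + 1) for hi in h)

degrees = [1, 1, 2, 3, 5, 8, 40, 100]   # a skewed, power-law-like sequence
h = [d // 2 for d in degrees]           # respects h_i <= d_i - 1 for d_i >= 1
p = 16
```

With these numbers, the DBH expectation is strictly below the Random one (since $h_i + 1 \le d_i$), and the Grid expectation is also below Random, consistent with the ordering stated above.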
Theorem 4. Assume each edge is hashed by our DBH method and $h_i \le d_i - 1$ for every $v_i$. The expected replication factor of DBH on p machines can be approximately bounded by
$$\mathbb{E}_{H,D}\left[\frac{p}{n}\sum_{i=1}^{n}\left(1-\left(1-\frac{1}{p}\right)^{h_i+1}\right)\right] \le p\left[1-\left(1-\frac{1}{p}\right)^{\hat\Omega'}\right],$$
where $\hat\Omega' = d_{\min}\cdot\frac{\alpha-1}{\alpha-2} - d_{\min}\cdot\frac{\alpha-1}{2\alpha-3} + \frac{1}{2}$. Note that
$$p\left[1-\left(1-\tfrac{1}{p}\right)^{\hat\Omega'}\right] < p\left[1-\left(1-\tfrac{1}{p}\right)^{\hat\Omega}\right].$$
Therefore, our DBH method reduces the expected replication factor. The term $\frac{\alpha-1}{2\alpha-3}$ increases as $\alpha$ decreases, which means DBH removes more of the replication factor when the power-law graph is more skewed. Note that Grid and our DBH method use two different mechanisms to reduce the replication factor: Grid reduces more of it as p grows. The two approaches can be combined to obtain further improvement, which is not the focus of this paper. Finally, we show that our DBH method also guarantees good edge-balance (workload balance) under power-law distributions.

Theorem 5. Assume each edge is hashed by the DBH method with $d_{\min}$, $\{v_i\}_{i=1}^n$, $\{d_i\}_{i=1}^n$, and $\{h_i\}_{i=1}^n$ defined above, and that the master vertices are evenly assigned. By taking the constant $2|E|/p = \mathbb{E}_D\left[\sum_{i=1}^n d_i\right]/p = n\,\mathbb{E}_D[d]/p$, there exists $\epsilon \in (0, 1)$ such that the expected edge-imbalance of DBH on p machines is bounded with high probability (w.h.p.). That is,
$$\mathbb{E}_{H,D}\left[\frac{1}{p}\sum_{i=1}^{n} h_i + \max_{j\in[p]}\sum_{v_i\in P_j}(d_i-h_i)\right] \le (1+\epsilon)\,\frac{2|E|}{p}.$$
Note that any $\epsilon$ satisfying $1/\epsilon \ll n/p$ works for this theorem, which gives a tighter bound for large n. Therefore, together with Theorem 4, this theorem shows that our DBH method can reduce the replication factor and simultaneously guarantee good workload balance.

5 Empirical Evaluation

In this section, empirical evaluation on real and synthetic graphs is used to verify the effectiveness of our DBH method. The cluster used in the experiments contains 64 machines connected via 1 Gb Ethernet. Each machine has 24 Intel Xeon cores and 96 GB of RAM.

5.1 Datasets

The graph datasets used in our experiments include both synthetic and real-world power-law graphs.
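To see how the bounds of Theorems 3 and 4 behave, here is a small numeric sketch; the choices of $d_{\min}$, $\alpha$, and p are illustrative:

```python
def random_bound(p, d_min, alpha):
    # Theorem 3: p * [1 - (1 - 1/p)^Omega], Omega = d_min*(alpha-1)/(alpha-2)
    omega = d_min * (alpha - 1) / (alpha - 2)
    return p * (1 - (1 - 1 / p) ** omega)

def dbh_bound(p, d_min, alpha):
    # Theorem 4: same form with
    # Omega' = Omega - d_min*(alpha-1)/(2*alpha-3) + 1/2
    omega = (d_min * (alpha - 1) / (alpha - 2)
             - d_min * (alpha - 1) / (2 * alpha - 3) + 0.5)
    return p * (1 - (1 - 1 / p) ** omega)
```

For p = 48 and $d_{\min}$ = 1, the DBH bound stays strictly below the Random bound across $\alpha \in (2, 3)$, and the gap widens as $\alpha$ approaches 2, i.e., as the graph becomes more skewed, matching the discussion above.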
Each synthetic power-law graph is generated by combining two synthetic directed graphs, whose in-degrees and out-degrees are sampled from power-law degree distributions with parameters α and β, respectively. The collection of synthetic graphs is split into two subsets: one with α ≥ β, shown in Table 1(a), and one with α < β, shown in Table 1(b). The real-world graphs are shown in Table 1(c). Some of the real-world graphs are the same as those in the PowerGraph experiments, and some additional real-world graphs are from the UF Sparse Matrix Collection [5].

Table 1: Datasets

(a) Synthetic graphs: α ≥ β
Alias  α    β    |E|
S1     2.2  2.2  71,334,974
S2     2.2  2.1  88,305,754
S3     2.2  2.0  134,881,233
S4     2.2  1.9  273,569,812
S5     2.1  2.1  103,838,645
S6     2.1  2.0  164,602,848
S7     2.1  1.9  280,516,909
S8     2.0  2.0  208,555,632
S9     2.0  1.9  310,763,862

(b) Synthetic graphs: α < β
Alias  α    β    |E|
S10    2.1  2.2  88,617,300
S11    2.0  2.2  135,998,503
S12    2.0  2.1  145,307,486
S13    1.9  2.2  280,090,594
S14    1.9  2.1  289,002,621
S15    1.9  2.0  327,718,498

(c) Real-world graphs
Alias  Graph             |V|    |E|
Tw     Twitter [10]      42M    1.47B
Arab   Arabic-2005 [5]   22M    0.6B
Wiki   Wiki [2]          5.7M   130M
LJ     LiveJournal [16]  5.4M   79M
WG     WebGoogle [12]    0.9M   5.1M

5.2 Baselines and Evaluation Metric

In our experiments, we adopt the Random method of PowerGraph [6] and the Grid method of GraphBuilder [8]1 as baselines for empirical comparison. The Hybrid method of PowerLyra [4] is not adopted for comparison because it combines both edge-cut and vertex-cut and is therefore not a pure vertex-cut method. One important metric is the replication factor, which reflects the communication cost. To test the speedup on a real application, we measure the total execution time of PageRank, which is forced to take 100 iterations. The speedup is defined as

    speedup = 100% × (γ_Alg − γ_DBH) / γ_Alg,

where γ_Alg is the execution time of PageRank with the method Alg.
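The speedup metric is simply the relative reduction in execution time. As a sketch (the timings below are made-up numbers, not the paper's measurements):

```python
def speedup(t_alg, t_dbh):
    """Relative execution-time reduction of DBH versus a baseline, in percent:
    100 * (t_alg - t_dbh) / t_alg."""
    return 100.0 * (t_alg - t_dbh) / t_alg
```

For example, if a baseline takes 200 s and DBH takes 100 s for the same 100 PageRank iterations, the speedup is 50%.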
Here, Alg can be Random or Grid. Because all the methods achieve good workload balance in our experiments, we do not report it here.

1GraphLab 2.2, released in July 2013, uses PowerGraph as its engine, and the Grid GP method has been adopted by GraphLab 2.2 to replace the original Random GP method. Detailed information can be found at http://graphlab.org/projects/index.html

5.3 Results

Figure 3 shows the replication factor on the two subsets of synthetic graphs. Our DBH method achieves a much lower replication factor than Random and Grid: it is reduced by up to 80% compared to Random and 60% compared to Grid.

[Figure 3: Experiments on two subsets of synthetic graphs. Panels (a) and (b) plot the replication factor of Random, Grid, and DBH over the datasets of Table 1(a) and Table 1(b), respectively. The number of machines is 48.]

Figure 4(a) shows the replication factor on the real-world graphs, where DBH again achieves the best performance. Figure 4(b) shows that the relative speedup of DBH on the PageRank computation is up to 60% over Random and 25% over Grid (on Twitter: 60.6% and 25%; the gains on WG, LJ, Wiki, and Arab range from 4.28% to 31.5%).

[Figure 4: Experiments on real-world graphs. Panel (a) plots the replication factor and panel (b) the execution speedup over Random and Grid. The number of machines is 48.]

Figure 5 shows the replication factor and the execution time of PageRank on the Twitter graph as the number of machines ranges from 8 to 64. Our DBH achieves the best performance in all cases.
[Figure 5: Experiments on the Twitter graph. Panel (a) plots the replication factor and panel (b) the execution time of Random, Grid, and DBH as the number of machines ranges from 8 to 64.]

6 Conclusion

In this paper, we have proposed a new vertex-cut graph partitioning method called degree-based hashing (DBH) for distributed graph-computing frameworks. DBH can effectively exploit the power-law degree distributions of natural graphs to achieve promising performance. Both theoretical and empirical results show that DBH can outperform the state-of-the-art methods. In future work, we will apply DBH to more big-data machine learning tasks.

7 Acknowledgements

This work is supported by the NSFC (No. 61100125, No. 61472182), the 863 Program of China (No. 2012AA011003), and the Fundamental Research Funds for the Central Universities.

References

[1] Lada A. Adamic and Bernardo A. Huberman. Zipf's law and the internet. Glottometrics, 3(1):143–150, 2002.
[2] Paolo Boldi and Sebastiano Vigna. The WebGraph framework I: compression techniques. In Proceedings of the 13th International Conference on World Wide Web (WWW), 2004.
[3] Andrei Broder, Ravi Kumar, Farzin Maghoul, Prabhakar Raghavan, Sridhar Rajagopalan, Raymie Stata, Andrew Tomkins, and Janet Wiener. Graph structure in the web. Computer Networks, 33(1):309–320, 2000.
[4] Rong Chen, Jiaxin Shi, Yanzhe Chen, Haibing Guan, and Haibo Chen. PowerLyra: Differentiated graph computation and partitioning on skewed graphs. Technical Report IPADSTR-2013-001, Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University, 2013.
[5] Timothy A. Davis and Yifan Hu. The University of Florida sparse matrix collection. ACM Transactions on Mathematical Software, 38(1):1, 2011.
[6] Joseph E. Gonzalez, Yucheng Low, Haijie Gu, Danny Bickson, and Carlos Guestrin.
PowerGraph: Distributed graph-parallel computation on natural graphs. In Proceedings of the 10th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2012.
[7] Joseph E. Gonzalez, Reynold S. Xin, Ankur Dave, Daniel Crankshaw, Michael J. Franklin, and Ion Stoica. GraphX: Graph processing in a distributed dataflow framework. In Proceedings of the 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2014.
[8] Nilesh Jain, Guangdeng Liao, and Theodore L. Willke. GraphBuilder: Scalable graph ETL framework. In Proceedings of the First International Workshop on Graph Data Management Experiences and Systems, 2013.
[9] George Karypis and Vipin Kumar. Multilevel graph partitioning schemes. In Proceedings of the International Conference on Parallel Processing (ICPP), 1995.
[10] Haewoon Kwak, Changhyun Lee, Hosung Park, and Sue Moon. What is Twitter, a social network or a news media? In Proceedings of the 19th International Conference on World Wide Web (WWW), 2010.
[11] Aapo Kyrola, Guy E. Blelloch, and Carlos Guestrin. GraphChi: Large-scale graph computation on just a PC. In Proceedings of the 10th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2012.
[12] Jure Leskovec. Stanford large network dataset collection. http://snap.stanford.edu/data/index.html, 2011.
[13] Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, and Joseph M. Hellerstein. GraphLab: A new framework for parallel machine learning. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2010.
[14] Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, and Joseph M. Hellerstein. Distributed GraphLab: A framework for machine learning in the cloud. In Proceedings of the International Conference on Very Large Data Bases (VLDB), 2012.
[15] Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, and Grzegorz Czajkowski.
Pregel: A system for large-scale graph processing. In Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD), 2010.
[16] Alan Mislove, Massimiliano Marcon, Krishna P. Gummadi, Peter Druschel, and Bobby Bhattacharjee. Measurement and analysis of online social networks. In Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement, 2007.
[17] Martin Raab and Angelika Steger. Balls into bins: A simple and tight analysis. In Randomization and Approximation Techniques in Computer Science, pages 159–170. Springer, 1998.
[18] Isabelle Stanton and Gabriel Kliot. Streaming graph partitioning for large distributed graphs. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2012.
[19] Charalampos Tsourakakis, Christos Gkantsidis, Bozidar Radunovic, and Milan Vojnovic. FENNEL: Streaming graph partitioning for massive scale graphs. In Proceedings of the 7th ACM International Conference on Web Search and Data Mining (WSDM), 2014.
[20] Lu Wang, Yanghua Xiao, Bin Shao, and Haixun Wang. How to partition a billion-node graph. In Proceedings of the International Conference on Data Engineering (ICDE), 2014.
Parallel Successive Convex Approximation for Nonsmooth Nonconvex Optimization

Meisam Razaviyayn∗ (meisamr@stanford.edu), Mingyi Hong† (mingyi@iastate.edu), Zhi-Quan Luo‡ (luozq@umn.edu), Jong-Shi Pang§ (jongship@usc.edu)

Abstract

Consider the problem of minimizing the sum of a smooth (possibly non-convex) function and a convex (possibly nonsmooth) function involving a large number of variables. A popular approach to solve this problem is the block coordinate descent (BCD) method, whereby at each iteration only one variable block is updated while the remaining variables are held fixed. With the recent advances in multi-core parallel processing technology, it is desirable to parallelize the BCD method by allowing multiple blocks to be updated simultaneously at each iteration of the algorithm. In this work, we propose an inexact parallel BCD approach where at each iteration, a subset of the variables is updated in parallel by minimizing convex approximations of the original objective function. We investigate the convergence of this parallel BCD method for both randomized and cyclic variable selection rules, and analyze the asymptotic and non-asymptotic behavior of the algorithm for both convex and non-convex objective functions. The numerical experiments suggest that for the Lasso problem, the cyclic block selection rule can outperform the randomized rule.

1 Introduction

Consider the following optimization problem:
$$\min_{x}\; h(x) \triangleq f(x_1,\ldots,x_n) + \sum_{i=1}^{n} g_i(x_i) \quad \text{s.t. } x_i\in X_i,\; i=1,2,\ldots,n, \qquad (1)$$
where $X_i \subseteq \mathbb{R}^{m_i}$ is a closed convex set, the function $f:\prod_{i=1}^n X_i \to \mathbb{R}$ is smooth (possibly non-convex), and $g(x) \triangleq \sum_{i=1}^n g_i(x_i)$ is a separable convex function (possibly nonsmooth). The above optimization problem appears in various fields such as machine learning, signal processing, wireless communication, image processing, social networks, and bioinformatics, to name just a few.
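A concrete instance of problem (1) is Lasso: take $f(x) = \frac{1}{2}\|Ax-b\|^2$ (smooth, here even convex), $g_i(x_i) = \lambda|x_i|$ (separable, convex, nonsmooth), and $X_i = \mathbb{R}$. A minimal sketch of the objective, where A, b, and λ are made-up illustrative data:

```python
import numpy as np

def h(x, A, b, lam):
    """Objective of problem (1) for Lasso: smooth f plus separable nonsmooth g."""
    f = 0.5 * np.sum((A @ x - b) ** 2)   # smooth part f(x) = 0.5*||Ax - b||^2
    g = lam * np.sum(np.abs(x))          # separable part g(x) = sum_i lam*|x_i|
    return f + g

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
```

Here each scalar coordinate is one "block"; more generally, the blocks $x_i$ can be vectors (group Lasso being the grouped analogue).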
These optimization problems are typically huge and should be solved expeditiously. A popular approach for solving the above multi-block optimization problem is the block coordinate descent (BCD) method, where at each iteration of BCD only one of the block variables is updated while the remaining blocks are held fixed. Since only one block is updated per iteration, the per-iteration storage and computational demands of the algorithm are low, which is desirable in huge-size problems. Furthermore, as observed in [1–3], these methods perform particularly well in practice.

∗Electrical Engineering Department, Stanford University
†Industrial and Manufacturing Systems Engineering, Iowa State University
‡Department of Electrical and Computer Engineering, University of Minnesota
§Department of Industrial and Systems Engineering, University of Southern California

The availability of high-performance multi-core computing platforms makes it increasingly desirable to develop parallel optimization methods. One category of such parallelizable methods is the (proximal) gradient methods. These methods are parallelizable in nature [4–8]; however, they are equivalent to successive minimization of a quadratic approximation of the objective function, which may not be tight, and hence can suffer from slow convergence in some practical applications [9]. To take advantage of the BCD method and parallel multi-core technology, different parallel BCD algorithms have recently been proposed in the literature. In particular, references [10–12] propose parallel coordinate-descent minimization methods for ℓ1-regularized convex optimization problems. Using the greedy (Gauss-Southwell) update rule, the recent works [9, 13] propose parallel BCD-type methods for general composite optimization problems. In contrast, references [2, 14–20] suggest the randomized block selection rule, which is more amenable to big-data optimization problems, in order to parallelize the BCD method.
Motivated by [1, 9, 15, 21], we propose a parallel inexact BCD method where, at each iteration of the algorithm, a subset of the blocks is updated by minimizing locally tight approximations of the objective function. Asymptotic and non-asymptotic convergence analyses of the algorithm are presented for both convex and non-convex cases and for different block selection rules. The proposed parallel algorithm is synchronous, which is different from the existing lock-free methods in [22, 23]. The contributions of this work are as follows:

• A parallel block coordinate descent method is proposed for non-convex nonsmooth problems. To the best of our knowledge, reference [9] is the only paper in the literature that focuses on parallelizing BCD for non-convex nonsmooth problems. That reference utilizes the greedy block selection rule, which requires a search among all blocks as well as communication among processing nodes in order to find the best blocks to update. This requirement can be demanding in practical scenarios where communication among nodes is costly or when the number of blocks is huge. In fact, this high computational cost motivated the authors of [9] to develop further inexact update strategies to alleviate the cost of the greedy block selection rule.
• The proposed parallel BCD algorithm allows both cyclic and randomized block selection rules. The deterministic (cyclic) update rule is different from the existing parallel randomized or greedy BCD methods in the literature; see, e.g., [2, 9, 13–20]. Based on our numerical experiments, this update rule is beneficial for solving the Lasso problem.
• The proposed method works not only with the constant step-size rule, but also with diminishing step-sizes, which is desirable when the Lipschitz constant of the objective function is not known.
• Unlike many existing algorithms in the literature, e.g.
[13–15], our parallel BCD algorithm utilizes a general approximation of the original function, which includes the linear/proximal approximation of the objective as a special case. The use of a general approximation instead of the linear/proximal one offers more flexibility and results in efficient algorithms for particular practical problems; see [21, 24] for specific examples.
• We present an iteration complexity analysis of the algorithm for both convex and non-convex scenarios. Unlike existing non-convex parallel methods such as [9], which only guarantee the asymptotic behavior of the algorithm, we also provide non-asymptotic guarantees on its convergence.

2 Parallel Successive Convex Approximation

As stated in the introduction, a popular approach for solving (1) is the BCD method, where at iteration r+1 of the algorithm the block variable $x_i$ is updated by solving the subproblem
$$x_i^{r+1} = \arg\min_{x_i\in X_i} h(x_1^r,\ldots,x_{i-1}^r,x_i,x_{i+1}^r,\ldots,x_n^r). \qquad (2)$$
In many practical problems, the update rule (2) is not available in closed form and hence not computationally cheap. One popular remedy is to replace the function h(·) in (2) with a well-chosen local convex approximation $\tilde h_i(x_i, x^r)$. That is, at iteration r+1, the block variable $x_i$ is updated by
$$x_i^{r+1} = \arg\min_{x_i\in X_i} \tilde h_i(x_i, x^r), \qquad (3)$$
where $\tilde h_i(x_i, x^r)$ is a convex (possibly upper-bound) approximation of the function h(·) with respect to the i-th block around the current iterate $x^r$. This approach, also known as block successive convex approximation or block successive upper-bound minimization [21], has been widely used in different applications; see [21, 24] for more details and different useful approximation functions. In this work, we assume that the approximation function $\tilde h_i(\cdot,\cdot)$ is of the following form:
$$\tilde h_i(x_i, y) = \tilde f_i(x_i, y) + g_i(x_i).$$
(4)

Here $\tilde f_i(\cdot, y)$ is an approximation of the function f(·) around the point y with respect to the i-th block. We further assume that $\tilde f_i(x_i, y): X_i \times X \to \mathbb{R}$ satisfies the following assumptions:

• $\tilde f_i(\cdot, y)$ is continuously differentiable and uniformly strongly convex with parameter τ, i.e.,
$$\tilde f_i(x_i,y) \ge \tilde f_i(x_i',y) + \langle \nabla_{x_i}\tilde f_i(x_i',y),\, x_i - x_i'\rangle + \frac{\tau}{2}\|x_i-x_i'\|^2, \quad \forall x_i,x_i'\in X_i,\ \forall y\in X.$$
• Gradient consistency assumption: $\nabla_{x_i}\tilde f_i(x_i, x) = \nabla_{x_i} f(x),\ \forall x\in X$.
• $\nabla_{x_i}\tilde f_i(x_i,\cdot)$ is Lipschitz continuous on X for all $x_i\in X_i$ with constant $\tilde L$, i.e.,
$$\|\nabla_{x_i}\tilde f_i(x_i,y) - \nabla_{x_i}\tilde f_i(x_i,z)\| \le \tilde L\|y-z\|,\quad \forall y,z\in X,\ \forall x_i\in X_i,\ \forall i.$$

For instance, the following traditional proximal/quadratic approximations of f(·) satisfy the above assumptions when the feasible set is compact and f(·) is twice continuously differentiable:

• $\tilde f_i(x_i, y) = \langle\nabla_{y_i} f(y),\, x_i - y_i\rangle + \frac{\alpha}{2}\|x_i-y_i\|^2$.
• $\tilde f_i(x_i, y) = f(x_i, y_{-i}) + \frac{\alpha}{2}\|x_i - y_i\|^2$, for α large enough.

For other practically useful approximations of f(·) and the stochastic/incremental counterparts, see [21, 25, 26]. With the recent advances in parallel processing machines, it is desirable to take advantage of multi-core machines by updating multiple blocks simultaneously in (3). Unfortunately, naively updating multiple blocks simultaneously using the approach (3) does not result in a convergent algorithm. Hence, we modify the update rule with a well-chosen step-size. More precisely, we propose Algorithm 1 for solving the optimization problem (1).

Algorithm 1 Parallel Successive Convex Approximation (PSCA) Algorithm
find a feasible point $x^0 \in X$ and set r = 0
for r = 0, 1, 2, ... do
  choose a subset $S^r \subseteq \{1, \ldots, n\}$
  calculate $\hat x_i^r = \arg\min_{x_i\in X_i} \tilde h_i(x_i, x^r)$, $\forall i \in S^r$
  set $x_i^{r+1} = x_i^r + \gamma^r(\hat x_i^r - x_i^r)$, $\forall i \in S^r$, and $x_i^{r+1} = x_i^r$, $\forall i \notin S^r$
end for

The procedure for selecting the subset $S^r$ is intentionally left unspecified in Algorithm 1. This selection could be based on different rules.
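As a hedged, concrete sketch of Algorithm 1, the following implements PSCA for Lasso with scalar blocks, using the first proximal approximation above, $\tilde f_i(x_i,y) = \langle\nabla_{y_i} f(y), x_i - y_i\rangle + \frac{\alpha}{2}(x_i-y_i)^2$, so each block subproblem has the closed-form soft-threshold solution. The data, step-size γ, strong-convexity parameter α, and the random block-selection probability are our own illustrative choices:

```python
import numpy as np

def soft_threshold(z, t):
    # Closed-form minimizer of 0.5*(x - z)^2 + t*|x| (elementwise).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def psca_lasso(A, b, lam, alpha, gamma, iters=300, block_frac=0.5, seed=0):
    """PSCA sketch for Lasso: at each iteration a random subset of scalar
    blocks is updated in parallel via the proximal approximation, then a
    step of size gamma is taken toward the block minimizers."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        S = rng.random(n) < block_frac          # randomized block selection
        grad = A.T @ (A @ x - b)                # gradient of f(x)=0.5||Ax-b||^2
        # Block minimizers of the approximation: soft-thresholding.
        x_hat = soft_threshold(x - grad / alpha, lam / alpha)
        x[S] += gamma * (x_hat[S] - x[S])       # damped parallel update
    return x
```

In practice α should dominate the relevant Lipschitz constant (e.g., the largest eigenvalue of $A^\top A$) and γ should satisfy the step-size conditions discussed below; the sketch makes no attempt to tune either.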
Reference [9] suggests the greedy variable selection rule: at each iteration of the algorithm in [9], the best responses of all the variables are calculated, and only the block variables with the largest amount of improvement are updated. A drawback of this approach is the overhead caused by the calculation of all best responses at each iteration, which is especially demanding when the size of the problem is huge. In contrast to [9], we suggest the following randomized or cyclic variable selection rules:

• Cyclic: Given a partition $\{T_0, \ldots, T_{m-1}\}$ of the set $\{1, 2, \ldots, n\}$, with $T_i \cap T_j = \emptyset$ for all $i \ne j$ and $\bigcup_{\ell=0}^{m-1} T_\ell = \{1,2,\ldots,n\}$, we say the variable selection is cyclic if
$$S^{mr+\ell} = T_\ell, \quad \forall \ell = 0,1,\ldots,m-1,\ \forall r.$$
• Randomized: The variable selection rule is called randomized if at each iteration the blocks are chosen randomly, given the past, so that
$$\Pr(j\in S^r \mid x^r, x^{r-1},\ldots,x^0) = p_j^r \ge p_{\min} > 0, \quad \forall j = 1,2,\ldots,n,\ \forall r.$$

3 Convergence Analysis: Asymptotic Behavior

We first make the standard assumption that ∇f(·) is Lipschitz continuous with constant $L_{\nabla f}$, i.e., $\|\nabla f(x) - \nabla f(y)\| \le L_{\nabla f}\|x-y\|$, and assume that $-\infty < \inf_{x\in X} h(x)$. Let us also define $\bar x$ to be a stationary point of (1) if there exists $d\in\partial g(\bar x)$ such that $\langle \nabla f(\bar x) + d,\, x - \bar x\rangle \ge 0,\ \forall x\in X$, i.e., the first-order optimality condition is satisfied at the point $\bar x$. The following lemma will help us study the asymptotic convergence of the PSCA algorithm.

Lemma 1 [9, Lemma 2] Define the mapping $\hat x(\cdot): X \to X$ as $\hat x(y) = (\hat x_i(y))_{i=1}^n$ with $\hat x_i(y) = \arg\min_{x_i\in X_i} \tilde h_i(x_i, y)$. Then the mapping $\hat x(\cdot)$ is Lipschitz continuous with constant $\hat L = \frac{\sqrt{n}\,\tilde L}{\tau}$, i.e., $\|\hat x(y) - \hat x(z)\| \le \hat L\|y-z\|,\ \forall y,z\in X$.

Having derived the above result, we are now ready to state our first result, which studies the limiting behavior of the PSCA algorithm.
This result is based on the sufficient decrease of the objective function, which has also been exploited in [9] for the greedy variable selection rule.

Theorem 1 Assume $\gamma^r\in(0,1]$, $\sum_{r=1}^\infty \gamma^r = +\infty$, and $\limsup_{r\to\infty}\gamma^r < \bar\gamma \triangleq \min\left\{\frac{\tau}{L_{\nabla f}}, \frac{\tau}{\tau + \tilde L\sqrt{n}}\right\}$. Suppose either the cyclic or the randomized block selection rule is employed. For the cyclic update rule, assume further that $\{\gamma^r\}_{r=1}^\infty$ is a monotonically decreasing sequence. Then every limit point of the iterates is a stationary point of (1) – deterministically for the cyclic update rule and almost surely for the randomized block selection rule.

Proof Using the standard sufficient decrease argument (see the supplementary material), one can show that
$$h(x^{r+1}) \le h(x^r) + \frac{\gamma^r(-\tau + \gamma^r L_{\nabla f})}{2}\,\|\hat x^r - x^r\|_{S^r}^2. \qquad (5)$$
Since $\limsup_{r\to\infty}\gamma^r < \bar\gamma$, for sufficiently large r there exists β > 0 such that
$$h(x^{r+1}) \le h(x^r) - \beta\gamma^r \|\hat x^r - x^r\|_{S^r}^2. \qquad (6)$$
Taking the conditional expectation of both sides implies
$$\mathbb{E}[h(x^{r+1}) \mid x^r] \le h(x^r) - \beta\gamma^r\, \mathbb{E}\left[\sum_{i=1}^n R_i^r\,\|\hat x_i^r - x_i^r\|^2 \,\Big|\, x^r\right], \qquad (7)$$
where $R_i^r$ is a Bernoulli random variable which is one if $i\in S^r$ and zero otherwise. Clearly, $\mathbb{E}[R_i^r \mid x^r] = p_i^r$, and therefore
$$\mathbb{E}[h(x^{r+1}) \mid x^r] \le h(x^r) - \beta\gamma^r p_{\min}\,\|\hat x^r - x^r\|^2, \quad \forall r. \qquad (8)$$
Thus $\{h(x^r)\}$ is a supermartingale with respect to the natural history; and by the supermartingale convergence theorem [27, Proposition 4.2], $h(x^r)$ converges and we have
$$\sum_{r=1}^{\infty} \gamma^r\|\hat x^r - x^r\|^2 < \infty, \quad \text{almost surely.} \qquad (9)$$
Let us now restrict our analysis to the probability-one event on which $h(x^r)$ converges and $\sum_{r=1}^\infty \gamma^r\|\hat x^r - x^r\|^2 < \infty$, and fix a realization in that event. Since $\sum_r \gamma^r = \infty$, equation (9) simply implies that, for the fixed realization, $\liminf_{r\to\infty}\|\hat x^r - x^r\| = 0$. Next we strengthen this result by proving that $\lim_{r\to\infty}\|\hat x^r - x^r\| = 0$. Suppose, to the contrary, that there exists δ > 0 such that $\Delta^r \triangleq \|\hat x^r - x^r\| \ge 2\delta$ infinitely often. Since $\liminf_{r\to\infty}\Delta^r = 0$, there exist a subset of indices K and indices $\{i_r\}$ such that for any $r\in K$,
$$\Delta^r < \delta, \qquad 2\delta < \Delta^{i_r}, \qquad \delta \le \Delta^j \le 2\delta,\ \forall j = r+1,\ldots,i_r-1.$$
(10)

Clearly,
$$\delta - \Delta^r \overset{(i)}{\le} \Delta^{r+1} - \Delta^r = \|\hat x^{r+1} - x^{r+1}\| - \|\hat x^r - x^r\| \overset{(ii)}{\le} \|\hat x^{r+1} - \hat x^r\| + \|x^{r+1} - x^r\| \overset{(iii)}{\le} (1+\hat L)\|x^{r+1} - x^r\| \overset{(iv)}{=} (1+\hat L)\gamma^r\|\hat x^r - x^r\| \le (1+\hat L)\gamma^r\delta, \qquad (11)$$
where (i) and (ii) are due to (10) and the triangle inequality, respectively; inequality (iii) is the result of Lemma 1; and (iv) follows from the update rule of the algorithm. Since $\limsup_{r\to\infty}\gamma^r < \frac{1}{1+\hat L}$, the above inequality implies that there exists α > 0 such that
$$\Delta^r > \alpha \qquad (12)$$
for all r large enough. Furthermore, since the chosen realization satisfies (9), we have $\lim_{r\to\infty}\sum_{t=r}^{i_r-1}\gamma^t(\Delta^t)^2 = 0$, which combined with (10) and (12) implies
$$\lim_{r\to\infty}\sum_{t=r}^{i_r-1}\gamma^t = 0. \qquad (13)$$
On the other hand, using similar reasoning as above, one can write
$$\delta < \Delta^{i_r} - \Delta^r = \|\hat x^{i_r} - x^{i_r}\| - \|\hat x^r - x^r\| \le \|\hat x^{i_r} - \hat x^r\| + \|x^{i_r} - x^r\| \le (1+\hat L)\sum_{t=r}^{i_r-1}\gamma^t\|\hat x^t - x^t\| \le 2\delta(1+\hat L)\sum_{t=r}^{i_r-1}\gamma^t,$$
and hence $\liminf_{r\to\infty}\sum_{t=r}^{i_r-1}\gamma^t > 0$, which contradicts (13). Therefore the contrary assumption does not hold, and we must have $\lim_{r\to\infty}\|\hat x^r - x^r\| = 0$, almost surely.

Now consider a limit point $\bar x$ with a subsequence $\{x^{r_j}\}_{j=1}^\infty$ converging to $\bar x$. Using the definition of $\hat x^{r_j}$, we have $\lim_{j\to\infty}\tilde h_i(\hat x_i^{r_j}, x^{r_j}) \le \tilde h_i(x_i, x^{r_j}),\ \forall x_i\in X_i,\ \forall i$. Therefore, by letting $j\to\infty$ and using the fact that $\lim_{r\to\infty}\|\hat x^r - x^r\| = 0$ almost surely, we obtain $\tilde h_i(\bar x_i, \bar x) \le \tilde h_i(x_i, \bar x),\ \forall x_i\in X_i,\ \forall i$, almost surely; which in turn, using the gradient consistency assumption, implies $\langle\nabla f(\bar x) + d,\, x - \bar x\rangle \ge 0,\ \forall x\in X$, almost surely, for some $d\in\partial g(\bar x)$, which completes the proof for the randomized block selection rule.

Now consider the cyclic update rule with a limit point $\bar x$. Due to the sufficient decrease bound (6), we have $\lim_{r\to\infty} h(x^r) = h(\bar x)$. Furthermore, by summing over (6), we obtain $\sum_{r=1}^\infty \gamma^r\|\hat x^r - x^r\|_{S^r}^2 < \infty$. Consider a fixed block i and define $\{r_k\}_{k=1}^\infty$ to be the subsequence of iterations in which block i is updated. Clearly, $\sum_{k=1}^\infty \gamma^{r_k}\|\hat x_i^{r_k} - x_i^{r_k}\|^2 < \infty$ and $\sum_{k=1}^\infty \gamma^{r_k} = \infty$, since $\{\gamma^r\}$ is monotonically decreasing. Therefore, $\liminf_{k\to\infty}\|\hat x_i^{r_k} - x_i^{r_k}\| = 0$.
Repeating the above argument with some slight modifications, which are omitted due to lack of space, we can show that $\lim_{k\to\infty}\|\hat x_i^{r_k} - x_i^{r_k}\| = 0$, implying that the limit point $\bar x$ is a stationary point of (1). ■

Remark 1 Theorem 1 covers both the diminishing and the constant step-size selection rule, or the combination of the two, i.e., decreasing the step-size until it falls below the constant $\bar\gamma$. It is also worth noting that the diminishing step-size rule is especially useful when the problem constants $L$, $\tilde L$, and $\tau$ are not known.

4 Convergence Analysis: Iteration Complexity

In this section, we present an iteration complexity analysis of the algorithm for both convex and non-convex cases.

4.1 Convex Case

When the function f(·) is convex, the overall objective function becomes convex, and as a result of Theorem 1, if a limit point exists, it is a global minimizer of (1). In this scenario, it is desirable to derive iteration complexity bounds for the algorithm. Note that our algorithm employs a linear combination of two consecutive points at each iteration and is hence different from the existing algorithms in [2, 14–20]. Therefore, not only in the cyclic case but also in the randomized scenario, the iteration complexity analysis of PSCA differs from existing results and must be investigated. We make the following assumptions for our iteration complexity analysis:

• The step-size is constant with $\gamma^r = \gamma < \frac{\tau}{L_{\nabla f}},\ \forall r$.
• The level set $\{x \mid h(x) \le h(x^0)\}$ is compact, and the next two assumptions hold on this set.
• The nonsmooth function g(·) is Lipschitz continuous, i.e., $|g(x) - g(y)| \le L_g\|x-y\|,\ \forall x, y\in X$. This assumption is satisfied in many practical problems such as (group) Lasso.
• The gradient of the approximation function $\tilde f_i(\cdot, y)$ is uniformly Lipschitz with constant $L_i$, i.e.,
$$\|\nabla_{x_i}\tilde f_i(x_i, y) - \nabla_{x_i}\tilde f_i(x_i', y)\| \le L_i\|x_i - x_i'\|, \quad \forall x_i, x_i'\in X_i.$$
Lemma 2 (Sufficient Descent) There exist $\hat{\beta}, \tilde{\beta} > 0$ such that for all $r \ge 1$:
• For the randomized rule: $\mathbb{E}[h(x^{r+1}) \mid x^r] \le h(x^r) - \hat{\beta}\|\hat{x}^r - x^r\|^2$.
• For the cyclic rule: $h(x^{m(r+1)}) \le h(x^{mr}) - \tilde{\beta}\|x^{m(r+1)} - x^{mr}\|^2$.

Proof The above result is an immediate consequence of (6) with $\hat{\beta} \triangleq \beta\gamma p_{\min}$ and $\tilde{\beta} \triangleq \beta\gamma$. ■

Due to the bounded level set assumption, there must exist constants $\hat{Q}, Q, R > 0$ such that
$$\|\nabla f(x^r)\| \le Q, \quad \|\nabla_{x_i}\tilde{f}_i(\hat{x}^r, x^r)\| \le \hat{Q}, \quad \|x^r - x^*\| \le R, \quad (14)$$
for all $x^r$. Next we use the constants $Q$, $\hat{Q}$, and $R$ to bound the cost-to-go of the algorithm.

Lemma 3 (Cost-to-go Estimate) For all $r \ge 1$:
• For the randomized rule: $\big(\mathbb{E}[h(x^{r+1}) \mid x^r] - h(x^*)\big)^2 \le 2\big((Q + L_g)^2 + nL^2R^2\big)\|\hat{x}^r - x^r\|^2$.
• For the cyclic rule: $\big(h(x^{m(r+1)}) - h(x^*)\big)^2 \le 3n\theta\frac{(1-\gamma)^2}{\gamma^2}\|x^{m(r+1)} - x^{mr}\|^2$,
for any optimal point $x^*$, where $L \triangleq \max_i\{L_i\}$ and $\theta \triangleq L_g^2 + \hat{Q}^2 + 2nR^2\tilde{L}^2\frac{\gamma^2}{(1-\gamma)^2} + 2R^2L^2$.

Proof Please see the supplementary materials for the proof. ■

Lemma 2 and Lemma 3 lead to the iteration complexity bound in the following theorem. The proof steps of this result are similar to those in [28] and are therefore omitted for space reasons.

Theorem 2 Define $\sigma \triangleq \frac{(\gamma L_{\nabla f} - \tau)\gamma p_{\min}}{4((Q+L_g)^2 + nL^2R^2)}$ and $\tilde{\sigma} \triangleq \frac{(\gamma L_{\nabla f} - \tau)\gamma}{6n\theta(1-\gamma)^2}$. Then
• For the randomized update rule: $\mathbb{E}[h(x^r)] - h(x^*) \le \frac{\max\{4\sigma^{-2},\, h(x^0) - h(x^*),\, 2\}}{\sigma}\cdot\frac{1}{r}$.
• For the cyclic update rule: $h(x^{mr}) - h(x^*) \le \frac{\max\{4\tilde{\sigma}^{-2},\, h(x^0) - h(x^*),\, 2\}}{\tilde{\sigma}}\cdot\frac{1}{r}$.

4.2 Non-convex Case

In this subsection we study the iteration complexity of the proposed randomized algorithm for a general nonconvex function $f(\cdot)$ under the constant step-size selection rule. This analysis is only for the randomized block selection rule. Since in the nonconvex scenario the iterates may not converge to the global optimum, closeness to the optimal solution cannot serve as the measure for the iteration complexity analysis. Instead, inspired by [29], where the size of the gradient of the objective function is used as a measure of optimality, we consider the size of the objective's proximal gradient as a measure of optimality.
More precisely, we define
$$\tilde{\nabla}h(x) = x - \arg\min_{y\in X}\Big\{\langle\nabla f(x),\, y - x\rangle + g(y) + \tfrac{1}{2}\|y - x\|^2\Big\}.$$
Clearly, $\tilde{\nabla}h(x) = 0$ when $x$ is a stationary point. Moreover, $\tilde{\nabla}h(\cdot)$ coincides with the gradient of the objective if $g \equiv 0$ and $X = \mathbb{R}^n$. The following theorem, which studies the decrease rate of $\|\tilde{\nabla}h(x)\|$, can be viewed as an iteration complexity analysis of the randomized PSCA.

Theorem 3 Consider the randomized block selection rule. Define $T_\epsilon$ to be the first time that $\mathbb{E}[\|\tilde{\nabla}h(x^r)\|^2] \le \epsilon$. Then $T_\epsilon \le \kappa/\epsilon$, where $\kappa \triangleq \frac{2(L^2 + 2L + 2)(h(x^0) - h^*)}{\hat{\beta}}$ and $h^* = \min_{x\in X} h(x)$.

Proof To simplify the presentation of the proof, define $\tilde{y}^r_i \triangleq \arg\min_{y_i\in X_i}\langle\nabla_{x_i}f(x^r),\, y_i - x^r_i\rangle + g_i(y_i) + \frac{1}{2}\|y_i - x^r_i\|^2$. Clearly, $\tilde{\nabla}h(x^r) = (x^r_i - \tilde{y}^r_i)_{i=1}^n$. The first-order optimality condition of the above optimization problem implies
$$\langle\nabla_{x_i}f(x^r) + \tilde{y}^r_i - x^r_i,\, x_i - \tilde{y}^r_i\rangle + g_i(x_i) - g_i(\tilde{y}^r_i) \ge 0, \quad \forall x_i \in X_i. \quad (15)$$
Furthermore, based on the definition of $\hat{x}^r_i$, we have
$$\langle\nabla_{x_i}\tilde{f}_i(\hat{x}^r_i, x^r),\, x_i - \hat{x}^r_i\rangle + g_i(x_i) - g_i(\hat{x}^r_i) \ge 0, \quad \forall x_i \in X_i. \quad (16)$$
Plugging the points $\hat{x}^r_i$ and $\tilde{y}^r_i$ into (15) and (16), respectively, and summing the two inequalities yields
$$\langle\nabla_{x_i}\tilde{f}_i(\hat{x}^r_i, x^r) - \nabla_{x_i}f(x^r) + x^r_i - \tilde{y}^r_i,\, \tilde{y}^r_i - \hat{x}^r_i\rangle \ge 0.$$
Using the gradient consistency assumption, we can write $\langle\nabla_{x_i}\tilde{f}_i(\hat{x}^r_i, x^r) - \nabla_{x_i}\tilde{f}_i(x^r_i, x^r) + x^r_i - \hat{x}^r_i + \hat{x}^r_i - \tilde{y}^r_i,\, \tilde{y}^r_i - \hat{x}^r_i\rangle \ge 0$, or equivalently, $\langle\nabla_{x_i}\tilde{f}_i(\hat{x}^r_i, x^r) - \nabla_{x_i}\tilde{f}_i(x^r_i, x^r) + x^r_i - \hat{x}^r_i,\, \tilde{y}^r_i - \hat{x}^r_i\rangle \ge \|\hat{x}^r_i - \tilde{y}^r_i\|^2$. Applying the Cauchy–Schwarz and triangle inequalities yields
$$\big(\|\nabla_{x_i}\tilde{f}_i(\hat{x}^r_i, x^r) - \nabla_{x_i}\tilde{f}_i(x^r_i, x^r)\| + \|x^r_i - \hat{x}^r_i\|\big)\,\|\tilde{y}^r_i - \hat{x}^r_i\| \ge \|\hat{x}^r_i - \tilde{y}^r_i\|^2.$$
Since the gradient of $\tilde{f}_i(\cdot, x)$ is Lipschitz, we must have
$$\|\hat{x}^r_i - \tilde{y}^r_i\| \le (1 + L_i)\|x^r_i - \hat{x}^r_i\|. \quad (17)$$
Using the inequality (17), the norm of the proximal gradient of the objective can be bounded by
$$\|\tilde{\nabla}h(x^r)\|^2 = \sum_{i=1}^n \|x^r_i - \tilde{y}^r_i\|^2 \le 2\sum_{i=1}^n \big(\|x^r_i - \hat{x}^r_i\|^2 + \|\hat{x}^r_i - \tilde{y}^r_i\|^2\big) \le 2\sum_{i=1}^n \big(1 + (1+L_i)^2\big)\|x^r_i - \hat{x}^r_i\|^2 \le 2(2 + 2L + L^2)\|\hat{x}^r - x^r\|^2.$$
Combining the above inequality with the sufficient decrease bound in (7), one can write
$$\sum_{r=0}^{T}\mathbb{E}\big[\|\tilde{\nabla}h(x^r)\|^2\big] \le \sum_{r=0}^{T} 2(2 + 2L + L^2)\,\mathbb{E}\big[\|\hat{x}^r - x^r\|^2\big] \le \sum_{r=0}^{T}\frac{2(2 + 2L + L^2)}{\hat{\beta}}\,\mathbb{E}\big[h(x^r) - h(x^{r+1})\big] \le \frac{2(2 + 2L + L^2)}{\hat{\beta}}\,\mathbb{E}\big[h(x^0) - h(x^{T+1})\big] \le \frac{2(2 + 2L + L^2)}{\hat{\beta}}\big(h(x^0) - h^*\big) = \kappa,$$
which implies that $T_\epsilon \le \kappa/\epsilon$. ■

5 Numerical Experiments

In this short section, we compare the numerical performance of the proposed algorithm with classical serial BCD methods. The algorithms are evaluated on the Lasso problem
$$\min_x\ \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda\|x\|_1,$$
where the matrix $A$ is generated according to Nesterov's approach [5]. Two problem instances are considered: $A \in \mathbb{R}^{2000\times 10{,}000}$ with a 1% sparsity level in $x^*$, and $A \in \mathbb{R}^{1000\times 100{,}000}$ with a 0.1% sparsity level in $x^*$. The approximation functions are chosen as in the numerical experiments of [9]: the block size is set to one ($m_i = 1$, $\forall i$) and the approximation function $\tilde{f}(x_i, y) = f(x_i, y_{-i}) + \frac{\alpha}{2}\|x_i - y_i\|^2$ is used, where $f(x) = \frac{1}{2}\|Ax - b\|^2$ is the smooth part of the objective. We choose a constant step-size $\gamma$ and proximal coefficient $\alpha$. In general, careful selection of the algorithm parameters yields a better numerical convergence rate. Smaller step-sizes $\gamma$ result in a less zigzagging convergence path; however, too small a step-size clearly slows down convergence. Furthermore, in order to make the approximation function sufficiently strongly convex, $\alpha$ needs to be chosen large enough; however, choosing $\alpha$ too large forces the next iterate to stay close to the current one, which also slows convergence; see the supplementary materials for related examples. Figure 1 and Figure 2 illustrate the behavior of the cyclic and randomized parallel BCD methods compared with their serial counterparts. The serial methods "Cyclic BCD" and "Randomized BCD" are based on the update rule in (2) with the cyclic and randomized block selection rules, respectively.
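To make the update concrete, here is a minimal pure-Python sketch of one parallel iteration of this scheme on a tiny Lasso instance. With the per-coordinate surrogate above, each subproblem has a closed-form soft-threshold solution; the dimensions and parameter values in the usage example are illustrative stand-ins, not the settings from the experiments.

```python
import math

def soft(z, lam):
    # soft-thresholding operator: proximal map of lam * |.|
    return math.copysign(max(abs(z) - lam, 0.0), z)

def lasso_obj(A, b, x, lam):
    # 0.5 * ||A x - b||^2 + lam * ||x||_1
    res = [sum(Aj[i] * x[i] for i in range(len(x))) - bj for Aj, bj in zip(A, b)]
    return 0.5 * sum(r * r for r in res) + lam * sum(abs(xi) for xi in x)

def psca_iteration(A, b, x, lam, alpha, gamma, blocks):
    # All selected coordinates solve their surrogate subproblem from the SAME
    # iterate x (this is the part that can run in parallel), then each is
    # damped: x_i <- x_i + gamma * (xhat_i - x_i).
    m, n = len(b), len(x)
    Ax = [sum(A[j][i] * x[i] for i in range(n)) for j in range(m)]
    xnew = x[:]
    for i in blocks:
        col = [A[j][i] for j in range(m)]
        r = [b[j] - Ax[j] + col[j] * x[i] for j in range(m)]  # residual excluding coord i
        # argmin_{x_i} 0.5*||col*x_i - r||^2 + (alpha/2)*(x_i - x[i])^2 + lam*|x_i|
        xhat = soft(sum(c * rj for c, rj in zip(col, r)) + alpha * x[i], lam) \
               / (sum(c * c for c in col) + alpha)
        xnew[i] = x[i] + gamma * (xhat - x[i])
    return xnew
```

Passing all coordinate indices as `blocks` gives the fully parallel (Jacobi-style) variant; passing a random subset gives the randomized rule.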
The variable $q$ denotes the number of processors; each processor updates 40 scalar variables in parallel. As can be seen in Figure 1 and Figure 2, parallelizing the BCD algorithm yields a more efficient algorithm. However, the computational gain does not grow linearly with the number of processors. In fact, beyond some point, increasing the number of processors leads to slower convergence. This is due to the communication overhead among the processing nodes, which comes to dominate the computation time; see the supplementary materials for more numerical experiments on this issue.

[Figure 1: Lasso problem, $A \in \mathbb{R}^{2000\times 10{,}000}$; relative error vs. time (seconds) for Cyclic/Randomized BCD and Cyclic/Randomized PSCA with $q = 4, 8, 32$.]

[Figure 2: Lasso problem, $A \in \mathbb{R}^{1000\times 100{,}000}$; relative error vs. time (seconds) for Cyclic/Randomized BCD and Cyclic/Randomized PSCA with $q = 8, 16, 32$.]

Acknowledgments: The authors are grateful to the University of Minnesota Graduate School Doctoral Dissertation Fellowship and AFOSR, grant number FA9550-12-1-0340, for the support during this research.

References
[1] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
[2] P. Richtárik and M. Takáč. Efficient serial and parallel coordinate descent methods for huge-scale truss topology design. In Operations Research Proceedings, pages 27–32. Springer, 2012.
[3] Y. T. Lee and A. Sidford. Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. In 54th Annual Symposium on Foundations of Computer Science (FOCS), pages 147–156. IEEE, 2013.
[4] I. Necoara and D. Clipici.
Efficient parallel coordinate descent algorithm for convex optimization problems with separable constraints: application to distributed MPC. Journal of Process Control, 23(3):243–253, 2013.
[5] Y. Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, 140(1):125–161, 2013.
[6] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117(1-2):387–423, 2009.
[7] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[8] S. J. Wright, R. D. Nowak, and M. Figueiredo. Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing, 57(7):2479–2493, 2009.
[9] F. Facchinei, S. Sagratella, and G. Scutari. Flexible parallel algorithms for big data optimization. arXiv preprint arXiv:1311.2444, 2013.
[10] J. K. Bradley, A. Kyrola, D. Bickson, and C. Guestrin. Parallel coordinate descent for ℓ1-regularized loss minimization. arXiv preprint arXiv:1105.5379, 2011.
[11] C. Scherrer, A. Tewari, M. Halappanavar, and D. Haglin. Feature clustering for accelerating parallel coordinate descent. In NIPS, pages 28–36, 2012.
[12] C. Scherrer, M. Halappanavar, A. Tewari, and D. Haglin. Scaling up coordinate descent algorithms for large ℓ1 regularization problems. arXiv preprint arXiv:1206.6409, 2012.
[13] Z. Peng, M. Yan, and W. Yin. Parallel and distributed sparse optimization. Preprint, 2013.
[14] I. Necoara and D. Clipici. Distributed coordinate descent methods for composite minimization. arXiv preprint arXiv:1312.5302, 2013.
[15] P. Richtárik and M. Takáč. Parallel coordinate descent methods for big data optimization. arXiv preprint arXiv:1212.0873, 2012.
[16] P. Richtárik and M. Takáč. On optimal probabilities in stochastic coordinate descent methods. arXiv preprint arXiv:1310.3438, 2013.
[17] O. Fercoq and P. Richtárik.
Accelerated, parallel and proximal coordinate descent. arXiv preprint arXiv:1312.5799, 2013.
[18] O. Fercoq, Z. Qu, P. Richtárik, and M. Takáč. Fast distributed coordinate descent for non-strongly convex losses. arXiv preprint arXiv:1405.5300, 2014.
[19] O. Fercoq and P. Richtárik. Smooth minimization of nonsmooth functions with parallel coordinate descent methods. arXiv preprint arXiv:1309.5885, 2013.
[20] A. Patrascu and I. Necoara. A random coordinate descent algorithm for large-scale sparse nonconvex optimization. In European Control Conference (ECC), pages 2789–2794. IEEE, 2013.
[21] M. Razaviyayn, M. Hong, and Z.-Q. Luo. A unified convergence analysis of block successive minimization methods for nonsmooth optimization. SIAM Journal on Optimization, 23(2):1126–1153, 2013.
[22] F. Niu, B. Recht, C. Ré, and S. J. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. Advances in Neural Information Processing Systems, 24:693–701, 2011.
[23] J. Liu, S. J. Wright, C. Ré, and V. Bittorf. An asynchronous parallel stochastic coordinate descent algorithm. arXiv preprint arXiv:1311.1873, 2013.
[24] J. Mairal. Optimization with first-order surrogate functions. arXiv preprint arXiv:1305.3120, 2013.
[25] J. Mairal. Incremental majorization-minimization optimization with application to large-scale machine learning. arXiv preprint arXiv:1402.4419, 2014.
[26] M. Razaviyayn, M. Sanjabi, and Z.-Q. Luo. A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks. arXiv preprint arXiv:1307.4457, 2013.
[27] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[28] M. Hong, X. Wang, M. Razaviyayn, and Z.-Q. Luo. Iteration complexity analysis of block coordinate descent methods. arXiv preprint arXiv:1310.6957, 2013.
[29] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Springer, 2004.
Active Regression by Stratification Sivan Sabato Department of Computer Science Ben Gurion University, Beer Sheva, Israel sabatos@cs.bgu.ac.il Remi Munos∗ INRIA Lille, France remi.munos@inria.fr Abstract We propose a new active learning algorithm for parametric linear regression with random design. We provide finite sample convergence guarantees for general distributions in the misspecified model. This is the first active learner for this setting that provably can improve over passive learning. Unlike other learning settings (such as classification), in regression the passive learning rate of O(1/ϵ) cannot in general be improved upon. Nonetheless, the so-called ‘constant’ in the rate of convergence, which is characterized by a distribution-dependent risk, can be improved in many cases. For a given distribution, achieving the optimal risk requires prior knowledge of the distribution. Following the stratification technique advocated in Monte-Carlo function integration, our active learner approaches the optimal risk using piecewise constant approximations. 1 Introduction In linear regression, the goal is to predict the real-valued labels of data points in Euclidean space using a linear function. The quality of the predictor is measured by the expected squared error of its predictions. In the standard regression setting with random design, the input is a labeled sample drawn i.i.d. from the joint distribution of data points and labels, and the cost of data is measured by the size of the sample. This model, which we refer to here as passive learning, is useful when both data and labels are costly to obtain. However, in domains where raw data is very cheap to obtain, a more suitable model is that of active learning (see, e.g., Cohn et al., 1994). In this model we assume that random data points are essentially free to obtain, and the learner can choose, for any observed data point, whether to ask also for its label. 
The cost of data here is the total number of requested labels. In this work we propose a new active learning algorithm for linear regression. We provide finite sample convergence guarantees for general distributions, under a possibly misspecified model. For parametric linear regression, the sample complexity of passive learning as a function of the excess error ϵ is of the order O(1/ϵ). This rate cannot in general be improved by active learning, unlike in the case of classification (Balcan et al., 2009). Nonetheless, the so-called 'constant' in this rate of convergence depends on the distribution, and this is where the potential improvement by active learning lies. Finite sample convergence of parametric linear regression in the passive setting has been studied by several authors (see, e.g., Györfi et al., 2002; Hsu et al., 2012). The standard approach is Ordinary Least Squares (OLS), where the output predictor is simply the minimizer of the mean squared error on the sample. Recently, a new algorithm for linear regression has been proposed (Hsu and Sabato, 2014). This algorithm obtains an improved convergence guarantee under less restrictive assumptions. An appealing property of this guarantee is that it provides a direct and tight relationship between the point-wise error of the optimal predictor and the convergence rate of the predictor. We exploit this to allow our active learner to adapt to the underlying distribution. Our approach employs a stratification technique, common in Monte-Carlo function integration (see, e.g., Glasserman, 2004). For any finite partition of the data domain, an optimal oracle risk can be defined, and the convergence rate of our active learner approaches the rate defined by this risk. By constructing an infinite sequence of partitions that become increasingly refined, one can approach the globally optimal oracle risk. (∗Current affiliation of R. Munos: Google DeepMind.)
Active learning for parametric regression has been investigated in several works, some of them in the context of statistical experimental design. One of the earliest works is Cohn et al. (1996), which proposes an active learning algorithm for locally weighted regression, assuming a well-specified model and an unbiased learning function. Wiens (1998, 2000) calculates a minimax optimal design for regression given the marginal data distribution, assuming that the model is approximately well-specified. Kanamori (2002) and Kanamori and Shimodaira (2003) propose an active learning algorithm that first calculates a maximum likelihood estimator and then uses this estimator to come up with an optimal design. Asymptotic convergence rates are provided under asymptotic normality assumptions. Sugiyama (2006) assumes an approximately well-specified model and i.i.d. label noise, and selects a design from a finite set of possibilities. The approach is adapted to pool-based active learning by Sugiyama and Nakajima (2009). Burbidge et al. (2007) propose an adaptation of Query By Committee. Cai et al. (2013) propose selecting examples based on their estimated potential to change the current model. Ganti and Gray (2012) propose a consistent pool-based active learner for the squared loss. A different line of research, which we do not discuss here, focuses on active learning for non-parametric regression, e.g., Efromovich (2007).

Outline: In Section 2 the formal setting and preliminaries are introduced. In Section 3 the notion of an oracle risk for a given distribution is presented. The stratification technique is detailed in Section 4. The new active learning algorithm and its analysis are provided in Section 5, with the main result stated in Theorem 5.1. In Section 6 we show via a simple example that in some cases the active learner approaches the maximal possible improvement over passive learning.

2 Setting and Preliminaries

We assume a data space in $\mathbb{R}^d$ and labels in $\mathbb{R}$.
For a distribution $P$ over $\mathbb{R}^d \times \mathbb{R}$, denote by $\mathrm{supp}_X(P)$ the support of the marginal of $P$ over $\mathbb{R}^d$. Denote the strictly positive reals by $\mathbb{R}^*_+$. We assume that labeled examples are distributed according to a distribution $D$. A random labeled example is $(X, Y) \sim D$, where $X \in \mathbb{R}^d$ is the example and $Y \in \mathbb{R}$ is the label. Throughout this work, whenever $\mathbb{P}[\cdot]$ or $\mathbb{E}[\cdot]$ appear without a subscript, they are taken with respect to $D$. $D_X$ is the marginal distribution of $X$ in pairs drawn from $D$. The conditional distribution of $Y$ when the example is $X = x$ is denoted $D_{Y|x}$, and the function $x \mapsto D_{Y|x}$ is denoted $D_{Y|X}$. A predictor is a function from $\mathbb{R}^d$ to $\mathbb{R}$ that predicts a label for every possible example. Linear predictors are functions of the form $x \mapsto x^\top w$ for some $w \in \mathbb{R}^d$. The squared loss of $w \in \mathbb{R}^d$ for an example $x \in \mathbb{R}^d$ with true label $y \in \mathbb{R}$ is $\ell((x, y), w) = (x^\top w - y)^2$. The expected squared loss of $w$ with respect to $D$ is $L(w, D) = \mathbb{E}_{(X,Y)\sim D}[(X^\top w - Y)^2]$. The goal of the learner is to find a $w$ such that $L(w)$ is small. The optimal loss achievable by a linear predictor is $L^\star(D) = \min_{w\in\mathbb{R}^d} L(w, D)$. We denote by $w^\star(D)$ a minimizer of $L(w, D)$, so that $L^\star(D) = L(w^\star(D), D)$. In all these notations the parameter $D$ is dropped when clear from context. In the passive learning setting, the learner draws random i.i.d. pairs $(X, Y) \sim D$; the sample complexity of the learner is the number of drawn pairs. In the active learning setting, the learner draws i.i.d. examples $X \sim D_X$, and for any drawn example the learner may draw a label according to the distribution $D_{Y|X}$; the label complexity of the learner is the number of drawn labels. In this setting it is easy to approximate various properties of $D_X$ to any accuracy, with zero label cost. Thus we assume for simplicity direct access to some properties of $D_X$, such as its covariance matrix $\Sigma_D = \mathbb{E}_{X\sim D_X}[XX^\top]$ and the expectations of some other functions of $X$. We assume w.l.o.g. that $\Sigma_D$ is not singular. For a matrix $A \in \mathbb{R}^{d\times d}$ and $x \in \mathbb{R}^d$, denote $\|x\|_A = \sqrt{x^\top A x}$.
Let $R^2_D = \max_{x\in \mathrm{supp}_X(D)} \|x\|^2_{\Sigma_D^{-1}}$. This is the condition number of the marginal distribution $D_X$. We have
$$\mathbb{E}[\|X\|^2_{\Sigma_D^{-1}}] = \mathbb{E}[\mathrm{tr}(X^\top \Sigma_D^{-1} X)] = \mathrm{tr}(\Sigma_D^{-1}\mathbb{E}[XX^\top]) = d. \quad (1)$$
Hsu and Sabato (2014) provide a passive learning algorithm for least squares linear regression with a minimax optimal sample complexity (up to logarithmic factors). The algorithm is based on splitting the labeled sample into several subsamples, performing OLS on each subsample, and then choosing one of the resulting predictors via a generalized median procedure. We give here a useful version of the result.¹

Theorem 2.1 (Hsu and Sabato, 2014). There are universal constants $C, c, c', c'' > 0$ such that the following holds. Let $D$ be a distribution over $\mathbb{R}^d\times\mathbb{R}$. There exists an efficient algorithm that accepts as input a confidence $\delta \in (0, 1)$ and a labeled sample of size $n$ drawn i.i.d. from $D$, and returns $\hat{w} \in \mathbb{R}^d$, such that if $n \ge cR^2_D \log(c'n)\log(c''/\delta)$, then with probability $1 - \delta$,
$$L(\hat{w}, D) - L^\star(D) = \|w^\star(D) - \hat{w}\|^2_{\Sigma_D} \le \frac{C\log(1/\delta)}{n}\cdot \mathbb{E}_D\big[\|X\|^2_{\Sigma_D^{-1}}(Y - X^\top w^\star(D))^2\big]. \quad (2)$$
This result is particularly useful in the context of active learning, since it provides an explicit dependence on the point-wise errors of the labels, including in heteroscedastic settings, where this error is not uniform. As we see below, in such cases active learning can potentially gain over passive learning. We denote an execution of the algorithm on a labeled sample $S$ by $\hat{w} \leftarrow \mathrm{REG}(S, \delta)$. The algorithm is used as a black box, so any other algorithm with similar guarantees could be used instead; for instance, similar guarantees might hold for OLS for a more restricted class of distributions. Throughout the analysis we omit, for readability, details of integer rounding whenever their effect is negligible. We use the notation $O(\mathrm{exp})$, where $\mathrm{exp}$ is a mathematical expression, as shorthand for $\bar{c}\cdot \mathrm{exp} + \bar{C}$ for some universal constants $\bar{c}, \bar{C} \ge 0$, whose values can vary between statements.
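The split-and-median construction behind REG can be illustrated in one dimension. This is a hypothetical simplification on our part: the actual algorithm of Hsu and Sabato (2014) works in $\mathbb{R}^d$ and uses a generalized median in the $\Sigma_D$ metric, while the sketch below fits lines through the origin and uses plain absolute distances.

```python
import random

def reg_median_of_ols_1d(sample, k, rng=random):
    # Split the sample into k groups, fit least squares through the origin
    # on each, and return the fit whose median distance to the others is smallest.
    sample = sample[:]
    rng.shuffle(sample)
    groups = [sample[i::k] for i in range(k)]
    ws = []
    for g in groups:
        sxx = sum(x * x for x, _ in g)
        sxy = sum(x * y for x, y in g)
        ws.append(sxy / sxx)
    def med_dist(w):
        d = sorted(abs(w - v) for v in ws)
        return d[len(d) // 2]
    return min(ws, key=med_dist)
```

Unlike a single OLS fit on the whole sample, the median step is insensitive to a minority of subsamples being corrupted by heavy-tailed label noise, which is the intuition behind the guarantee in Theorem 2.1.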
3 An Oracle Bound for Active Regression

The bound in Theorem 2.1 crucially depends on the input distribution $D$. In an active learning framework, rejection sampling (Von Neumann, 1951) can be used to simulate random draws of labeled examples according to a different distribution, without additional label costs. By selecting a suitable distribution, it might be possible to improve over Eq. (2). Rejection sampling for regression has been explored in Kanamori (2002); Kanamori and Shimodaira (2003); Sugiyama (2006) and others, mostly in an asymptotic regime. Here we use the explicit bound in Eq. (2) to obtain new finite sample guarantees that hold for general distributions. Let $\phi : \mathbb{R}^d \to \mathbb{R}^*_+$ be a strictly positive weight function such that $\mathbb{E}[\phi(X)] = 1$. We define the distribution $P_\phi$ over $\mathbb{R}^d \times \mathbb{R}$ as follows. For $x \in \mathbb{R}^d$, $y \in \mathbb{R}$, let
$$\Gamma_\phi(x, y) = \Big\{(\tilde{x}, \tilde{y}) \in \mathbb{R}^d \times \mathbb{R} \ \Big|\ x = \frac{\tilde{x}}{\sqrt{\phi(\tilde{x})}},\ y = \frac{\tilde{y}}{\sqrt{\phi(\tilde{x})}}\Big\},$$
and define $P_\phi$ by
$$\forall (X, Y) \in \mathbb{R}^d \times \mathbb{R}, \quad P_\phi(X, Y) = \int_{(\tilde{X}, \tilde{Y})\in\Gamma_\phi(X,Y)} \phi(\tilde{X})\, dD(\tilde{X}, \tilde{Y}).$$
A labeled i.i.d. sample drawn according to $P_\phi$ can be simulated using rejection sampling without additional label costs (see Alg. 2 in Appendix B). We denote drawing $m$ random labeled examples according to $P$ by $S \leftarrow \mathrm{SAMPLE}(P, m)$. For the squared loss on $P_\phi$ we have
$$L(w, P_\phi) = \int \ell((X, Y), w)\, dP_\phi(X, Y) \overset{(*)}{=} \int \ell((X, Y), w) \int_{(\tilde{X}, \tilde{Y})\in\Gamma_\phi(X,Y)} \phi(\tilde{X})\, dD(\tilde{X}, \tilde{Y}) = \int \ell\Big(\Big(\frac{\tilde{X}}{\sqrt{\phi(\tilde{X})}}, \frac{\tilde{Y}}{\sqrt{\phi(\tilde{X})}}\Big), w\Big)\, \phi(\tilde{X})\, dD(\tilde{X}, \tilde{Y}) = \int \ell((X, Y), w)\, dD(X, Y) = L(w, D).$$
The equality $(*)$ can be rigorously derived from the definition of Lebesgue integration. It follows that $L^\star(D) = L^\star(P_\phi)$ and that $w^\star(D) = w^\star(P_\phi)$; we thus denote these simply by $L^\star$ and $w^\star$. In a similar manner, we have $\Sigma_{P_\phi} = \int XX^\top dP_\phi(X, Y) = \int XX^\top dD(X, Y) = \Sigma_D$. From now on we denote this matrix simply by $\Sigma$. We denote $\|\cdot\|_\Sigma$ by $\|\cdot\|$, and $\|\cdot\|_{\Sigma^{-1}}$ by $\|\cdot\|_*$. The condition number of $P_\phi$ is $R^2_{P_\phi} = \max_{x\in\mathrm{supp}_X(D)} \frac{\|x\|^2_*}{\phi(x)}$.

¹This is a slight variation of the original result of Hsu and Sabato (2014); see Appendix A.
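Alg. 2 itself is not included in this excerpt, but the construction suggests the following sketch, which additionally assumes a known upper bound `phi_max` on $\phi$ over $\mathrm{supp}_X(D)$: draw $(\tilde{X}, \tilde{Y}) \sim D$, accept with probability $\phi(\tilde{X})/\phi_{\max}$, and rescale the accepted pair by $1/\sqrt{\phi(\tilde{X})}$.

```python
import math, random

def sample_P_phi(draw_D, phi, phi_max, m, rng=random):
    # Simulate m i.i.d. draws from P_phi by rejection sampling on D.
    # Rescaling by 1/sqrt(phi(x)) makes the squared loss of any fixed w
    # have the same expectation under P_phi as under D.
    out = []
    while len(out) < m:
        x, y = draw_D()
        if rng.random() < phi(x) / phi_max:
            s = 1.0 / math.sqrt(phi(x))
            out.append((x * s, y * s))
    return out
```

A quick sanity check is that $L(w, P_\phi) = L(w, D)$ for any fixed $w$, which the rescaling identity above is designed to guarantee.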
If the regression algorithm is applied to $n$ labeled examples drawn from the simulated $P_\phi$, then by Eq. (2) and the equalities above, with probability $1 - \delta$, if $n \ge cR^2_{P_\phi}\log(c'n)\log(c''/\delta)$,
$$L(\hat{w}) - L^\star \le C\cdot\frac{\log(1/\delta)}{n}\cdot \mathbb{E}_{P_\phi}\big[\|X\|^2_*(X^\top w^\star - Y)^2\big] = C\cdot\frac{\log(1/\delta)}{n}\cdot \mathbb{E}_D\big[\|X\|^2_*(X^\top w^\star - Y)^2/\phi(X)\big].$$
Denote $\psi^2(x) := \|x\|^2_*\cdot \mathbb{E}_D[(X^\top w^\star - Y)^2 \mid X = x]$, and further denote $\rho(\phi) := \mathbb{E}_D[\psi^2(X)/\phi(X)]$, which we term the risk of $\phi$. Then, if $n \ge cR^2_{P_\phi}\log(c'n)\log(c''/\delta)$, with probability $1-\delta$,
$$L(\hat{w}) - L^\star \le C\cdot\frac{\rho(\phi)\log(1/\delta)}{n}. \quad (3)$$
A passive learner essentially uses the default $\phi$, which is constantly $1$, for a risk of $\rho(1) = \mathbb{E}[\psi^2(X)]$. The $\phi$ that minimizes the bound, however, is the solution to the following minimization problem:
$$\text{Minimize}_\phi\ \ \mathbb{E}[\psi^2(X)/\phi(X)] \quad\text{subject to}\quad \mathbb{E}[\phi(X)] = 1, \qquad \phi(x) \ge \frac{c\log(c'n)\log(c''/\delta)}{n}\|x\|^2_*,\ \ \forall x\in\mathrm{supp}_X(D). \quad (4)$$
The second constraint is due to the requirement $n \ge cR^2_{P_\phi}\log(c'n)\log(c''/\delta)$. The following lemma bounds the risk of the optimal $\phi$; its proof is provided in Appendix C.

Lemma 3.1. Let $\phi^\star$ be the solution to the minimization problem in Eq. (4). Then for $n \ge O(d\log(d)\log(1/\delta))$,
$$\mathbb{E}^2[\psi(X)] \le \rho(\phi^\star) \le \mathbb{E}^2[\psi(X)]\big(1 + O(d\log(n)\log(1/\delta)/n)\big).$$
The ratio between the risk of $\phi^\star$ and the risk of the default $\phi$ thus approaches $\mathbb{E}[\psi^2(X)]/\mathbb{E}^2[\psi(X)]$, and this is also the optimal factor of label complexity reduction. The ratio is $1$ for highly symmetric distributions, where the support of $D_X$ lies on a sphere and all the noise variances are identical. In such cases, active learning is not helpful, even asymptotically. In the general case, however, this ratio is unbounded, and so is the potential improvement from active learning. The crucial challenge is that without access to the conditional distribution $D_{Y|X}$, Eq. (4) cannot be solved directly. We therefore consider the oracle risk $\rho^\star = \mathbb{E}^2[\psi(X)]$, which can be approached if an oracle divulges the optimal $\phi$ and $n \to \infty$. The goal of the active learner is to approach the oracle guarantee without prior knowledge of $D_{Y|X}$.
4 Approaching the Oracle Bound with Strata

To approximate the oracle guarantee, we borrow the stratification approach used in Monte-Carlo function integration (e.g., Glasserman, 2004). Partition $\mathrm{supp}_X(D)$ into $K$ disjoint subsets $\mathcal{A} = \{A_1, \ldots, A_K\}$, and consider for $\phi$ only functions that are constant on each $A_i$ and satisfy $\mathbb{E}[\phi(X)] = 1$. Each function in this class can be described by a vector $a = (a_1, \ldots, a_K) \in (\mathbb{R}^*_+)^K$: the value of the function on $x \in A_i$ is $\frac{a_i}{\sum_{j\in[K]} p_j a_j}$, where $p_j := \mathbb{P}[X \in A_j]$. Let $\phi_a$ denote the function defined by $a$, leaving the dependence on the partition $\mathcal{A}$ implicit. To calculate the risk of $\phi_a$, denote $\mu_i := \mathbb{E}[\|X\|^2_*(X^\top w^\star - Y)^2 \mid X \in A_i]$. From the definition of $\rho(\phi)$,
$$\rho(\phi_a) = \Big(\sum_{j\in[K]} p_j a_j\Big)\Big(\sum_{i\in[K]} \frac{p_i}{a_i}\mu_i\Big). \quad (5)$$
It is easy to verify that $a^\star$ with $a^\star_i = \sqrt{\mu_i}$ minimizes $\rho(\phi_a)$, and
$$\rho^\star_{\mathcal{A}} := \inf_{a\in\mathbb{R}^K_+} \rho(\phi_a) = \rho(\phi_{a^\star}) = \Big(\sum_{i\in[K]} p_i\sqrt{\mu_i}\Big)^2. \quad (6)$$
$\rho^\star_{\mathcal{A}}$ is the oracle risk for the fixed partition $\mathcal{A}$. In comparison, the standard passive learner has risk $\rho(\phi_1) = \sum_{i\in[K]} p_i\mu_i$. Thus, the ratio between the optimal risk and the default risk can be as large as $1/\min_i p_i$. Note that here, as in the definition of $\rho^\star$ above, $\rho^\star_{\mathcal{A}}$ might not be achievable for samples up to a certain size, because of the additional requirement that $\phi$ not be too small (see Eq. (4)); nonetheless, this optimistic value is useful as a baseline for comparison. Consider an infinite sequence of partitions: for $j \in \mathbb{N}$, $\mathcal{A}^j = \{A^j_1, \ldots, A^j_{K_j}\}$, with $K_j \to \infty$. Similarly to Carpentier and Munos (2012), under mild regularity assumptions, if the partitions have diameters and probabilities that approach zero, then $\rho^\star_{\mathcal{A}^j} \to \rho(\phi^\star)$, achieving the optimal upper bound for Eq. (3). For a fixed partition $\mathcal{A}$, the challenge is then to approach $\rho^\star_{\mathcal{A}}$ without prior knowledge of the true $\mu_i$'s, using relatively few extra labeled examples. In the next section we describe our active learning algorithm that does just that.
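Eqs. (5)–(6) are easy to check numerically; the strata probabilities and $\mu_i$ values used in the check below are arbitrary illustrations, not data from the paper.

```python
def rho_a(p, mu, a):
    # Eq. (5): risk of the piecewise-constant weight function phi_a
    return sum(pj * aj for pj, aj in zip(p, a)) * \
           sum(pi * mi / ai for pi, mi, ai in zip(p, mu, a))

def oracle_risk(p, mu):
    # Eq. (6): rho*_A = (sum_i p_i sqrt(mu_i))^2, attained at a_i = sqrt(mu_i)
    return sum(pi * mi ** 0.5 for pi, mi in zip(p, mu)) ** 2

def passive_risk(p, mu):
    # risk of the default phi = 1: sum_i p_i mu_i
    return sum(pi * mi for pi, mi in zip(p, mu))
```

By the Cauchy–Schwarz inequality, the oracle risk never exceeds the passive risk, with equality exactly when all $\mu_i$ coincide.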
5 Active Learning for Regression

To approach the optimal risk $\rho^\star_{\mathcal{A}}$, we need good estimates of the $\mu_i$ for $i \in [K]$. Note that $\mu_i$ depends on the optimal predictor $w^\star$, so its value depends on the entire distribution. We assume that the error of the label relative to the optimal predictor is bounded: there exists $b \ge 0$ such that $(x^\top w^\star - y)^2 \le b^2\|x\|^2_*$ for all $(x, y)$ in the support of $D$. This boundedness assumption can be replaced by an assumption of sub-Gaussian tails, with similar results. Our assumption also implies $L^\star = \mathbb{E}[(X^\top w^\star - Y)^2] \le b^2\mathbb{E}[\|X\|^2_*] = b^2 d$, where the last equality follows from Eq. (1).

Algorithm 1 Active Regression
input: Confidence $\delta \in (0, 1)$, label budget $m$, partition $\mathcal{A}$.   output: $\hat{w} \in \mathbb{R}^d$
1: $m_1 \leftarrow m^{4/5}/2$, $m_2 \leftarrow m^{4/5}/2$, $m_3 \leftarrow m - (m_1 + m_2)$.
2: $\delta_1 \leftarrow \delta/4$, $\delta_2 \leftarrow \delta/4$, $\delta_3 \leftarrow \delta/2$.
3: $S_1 \leftarrow \mathrm{SAMPLE}(P_{\phi[\Sigma]}, m_1)$
4: $\hat{v} \leftarrow \mathrm{REG}(S_1, \delta_1)$
5: $\Delta \leftarrow \sqrt{\frac{Cd^2b^2\log(1/\delta_1)}{m_1}}$;  $\gamma \leftarrow (b + 2\Delta)^2\sqrt{K\log(2K/\delta_2)/m_2}$;  $t \leftarrow m_2/K$.
6: for $i = 1$ to $K$ do
7:   $T_i \leftarrow \mathrm{SAMPLE}(Q_i, t)$.
8:   $\tilde{\mu}_i \leftarrow \Theta_i\cdot\big(\frac{1}{t}\sum_{(x,y)\in T_i}(|x^\top\hat{v} - y| + \Delta)^2 + \gamma\big)$.
9:   $\hat{a}_i \leftarrow \sqrt{\tilde{\mu}_i}$.
10: end for
11: $\xi \leftarrow \frac{c\log(c'm_3)\log(c''/\delta_3)}{m_3}$
12: Set $\hat{\phi}$ such that for $x \in A_i$, $\hat{\phi}(x) := \|x\|^2_*\cdot\xi + (1 - d\xi)\frac{\hat{a}_i}{\sum_j p_j\hat{a}_j}$.
13: $S_3 \leftarrow \mathrm{SAMPLE}(P_{\hat{\phi}}, m_3)$.
14: $\hat{w} \leftarrow \mathrm{REG}(S_3, \delta_3)$.

Our active regression algorithm, listed in Alg. 1, operates in three stages. In the first stage, the goal is to find a crude loss optimizer $\hat{v}$, to be used later for estimating the $\mu_i$. To find this optimizer, the algorithm draws a labeled sample of size $m_1$ from the distribution $P_{\phi[\Sigma]}$, where $\phi[\Sigma](x) := \frac{1}{d}x^\top\Sigma^{-1}x = \frac{1}{d}\|x\|^2_*$. Note that $\rho(\phi[\Sigma]) = d\cdot\mathbb{E}[(X^\top w^\star - Y)^2] = dL^\star$. In addition, $R^2_{P_{\phi[\Sigma]}} = d$. Consequently, by Eq. (3), applying REG to $m_1 \ge O(d\log(d)\log(1/\delta_1))$ random draws from $P_{\phi[\Sigma]}$ yields, with probability $1 - \delta_1$,
$$L(\hat{v}) - L^\star = \|\hat{v} - w^\star\|^2 \le \frac{CdL^\star\log(1/\delta_1)}{m_1} \le \frac{Cd^2b^2\log(1/\delta_1)}{m_1}. \quad (7)$$
In Needell et al. (2013) a similar distribution is used to speed up gradient descent for convex losses. Here, we make use of $\phi[\Sigma]$ as a stepping stone toward the optimal $\phi$, at a rate that does not depend on the condition number of $D$.
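Step 12 of Alg. 1 mixes a floor term with the normalized stratum weights so that $\hat{\phi}$ is both bounded below and integrates to one. A minimal sketch of that step (the identity-like `norm_star_sq` argument and the per-stratum inputs in the check are hypothetical placeholders for quantities the algorithm computes earlier):

```python
def make_phi_hat(norm_star_sq, xi, d, p, a_hat):
    # Step 12: phi_hat(x) = ||x||_*^2 * xi + (1 - d*xi) * a_i / sum_j p_j a_j  for x in A_i
    z = sum(pj * aj for pj, aj in zip(p, a_hat))
    def phi_hat(x, i):
        return norm_star_sq(x) * xi + (1.0 - d * xi) * a_hat[i] / z
    return phi_hat
```

Since $\mathbb{E}[\|X\|^2_*] = d$ by Eq. (1) and $\sum_i p_i \hat{a}_i / \sum_j p_j \hat{a}_j = 1$, the two terms contribute $d\xi$ and $1 - d\xi$, so $\mathbb{E}[\hat{\phi}(X)] = 1$ as required.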
Denote by $E$ the event that Eq. (7) holds. In the second stage, estimates for $\mu_i$, denoted $\tilde{\mu}_i$, are calculated from labeled samples that are drawn from another set of probability distributions, $Q_i$ for $i \in [K]$. These distributions are defined as follows. Denote $\Theta_i = \mathbb{E}[\|X\|^4_* \mid X \in A_i]$. For $x \in \mathbb{R}^d$, $y \in \mathbb{R}$, let
$$\Gamma_i(x, y) = \Big\{(\tilde{x}, \tilde{y}) \in A_i \times \mathbb{R} \ \Big|\ x = \frac{\tilde{x}}{\|\tilde{x}\|_*},\ y = \frac{\tilde{y}}{\|\tilde{x}\|_*}\Big\},$$
and define $Q_i$ by $dQ_i(X, Y) = \frac{1}{\Theta_i}\int_{(\tilde{X},\tilde{Y})\in\Gamma_i(X,Y)} \|\tilde{X}\|^4_*\, dD(\tilde{X}, \tilde{Y})$. Clearly, for all $x \in \mathrm{supp}_X(Q_i)$, $\|x\|_* = 1$. Drawing labeled examples from $Q_i$ can be done using rejection sampling, similarly to $P_\phi$. The use of the $Q_i$ distributions in the second stage again helps avoid a dependence on the condition number of $D$ in the convergence rates. In the last stage, a weight function $\hat{\phi}$ is determined based on the estimates $\tilde{\mu}_i$. A labeled sample is drawn from $P_{\hat{\phi}}$, and the algorithm returns the predictor resulting from running REG on this sample. The following theorem gives our main result, a finite sample convergence rate guarantee.

Theorem 5.1. Let $b \ge 0$ be such that $(x^\top w^\star - y)^2 \le b^2\|x\|^2_*$ for all $(x, y)$ in the support of $D$, and let $\Lambda_D = \mathbb{E}[\|X\|^4_*]$. If Alg. 1 is executed with $\delta$ and $m$ such that $m \ge O(d\log(d)\log(1/\delta))^{5/4}$, then it draws $m$ labels, and with probability $1 - \delta$,
$$L(\hat{w}) - L^\star \le \frac{C\rho^\star_{\mathcal{A}}\log(3/\delta)}{m} + O\bigg(\frac{\log(1/\delta)}{m^{6/5}}\,\rho^\star_{\mathcal{A}} + \frac{d^{1/2}\Lambda_D^{1/4}\log^{5/4}(1/\delta)}{m^{6/5}}\,b^{1/2}\rho^{\star\,3/4}_{\mathcal{A}} + \frac{d\,\Lambda_D^{1/2}K^{1/4}\log^{1/4}(K/\delta)\log(1/\delta)}{m^{6/5}}\,b\,\rho^{\star\,1/2}_{\mathcal{A}}\bigg).$$
The theorem shows that the learning rate of the active learner approaches the oracle rate for the given partition. With an infinite sequence of partitions and $K$ an increasing function of $m$, the optimal oracle risk can also be approached. The rate of convergence to the oracle rate does not depend on the condition number of $D$, unlike the passive learning rate. In addition, $m = O(d\log(d)\log(1/\delta))^{5/4}$ suffices to approach the optimal rate, whereas $m = \Omega(d)$ is obviously necessary for any learner.
Interestingly, in active learning for classification as well, it has been observed that active learning in a non-realizable setting requires a super-linear dependence on $d$ (see, e.g., Dasgupta et al., 2008). Whether this dependence is unavoidable for active regression is an open question. Theorem 5.1 is proved via a series of lemmas. First, we show that if $\tilde{\mu}_i$ is a good approximation of $\mu_i$, then $\rho_{\mathcal{A}}(\hat{\phi})$ can be bounded as a function of the oracle risk for $\mathcal{A}$.

Lemma 5.2. Suppose $m_3 \ge O(d\log(d)\log(1/\delta_3))$, and let $\hat{\phi}$ be as in Alg. 1. If, for some $\alpha, \beta \ge 0$,
$$\mu_i \le \tilde{\mu}_i \le \mu_i + \alpha_i\sqrt{\mu_i} + \beta_i, \quad (8)$$
then
$$\rho_{\mathcal{A}}(\hat{\phi}) \le \big(1 + O(d\log(m_3)\log(1/\delta_3)/m_3)\big)\Big(\rho^\star_{\mathcal{A}} + \big(\textstyle\sum_i p_i\alpha_i\big)^{1/2}\rho^{\star\,3/4}_{\mathcal{A}} + \big(\textstyle\sum_i p_i\beta_i\big)^{1/2}\rho^{\star\,1/2}_{\mathcal{A}}\Big).$$
Proof. We have, for all $x \in A_i$, $\hat{\phi}(x) \ge (1 - d\xi)\frac{\hat{a}_i}{\sum_j p_j\hat{a}_j}$, where $\xi = \frac{c\log(c'm_3)\log(c''/\delta_3)}{m_3}$. Therefore
$$\rho(\hat{\phi}) \equiv \mathbb{E}[\psi^2(X)/\hat{\phi}(X)] \le \frac{1}{1 - d\xi}\Big(\sum_j p_j\hat{a}_j\Big)\sum_i p_i\cdot\mathbb{E}[\psi^2(X)/\hat{a}_i \mid X \in A_i] = \frac{1}{1 - d\xi}\Big(\sum_j p_j\hat{a}_j\Big)\sum_i p_i\mu_i/\hat{a}_i = \Big(1 + \frac{d\xi}{1 - d\xi}\Big)\rho(\phi_{\hat{a}}).$$
For $m_3 \ge O(d\log(d)\log(1/\delta_3))$, we have $d\xi \le \frac{1}{2}$,² so $\frac{d\xi}{1-d\xi} \le 2d\xi$. It follows that
$$\rho(\hat{\phi}) \le \big(1 + O(d\log(m_3)\log(1/\delta_3)/m_3)\big)\rho(\phi_{\hat{a}}). \quad (9)$$
By Eq. (8),
$$\rho_{\mathcal{A}}(\phi_{\hat{a}}) = \Big(\sum_j p_j\sqrt{\tilde{\mu}_j}\Big)\sum_i p_i\mu_i/\sqrt{\tilde{\mu}_i} \le \Big(\sum_j p_j\big(\sqrt{\mu_j} + \sqrt{\alpha_j}\mu_j^{1/4} + \sqrt{\beta_j}\big)\Big)\sum_i p_i\sqrt{\mu_i} = \rho^\star_{\mathcal{A}} + \Big(\sum_j p_j\sqrt{\alpha_j}\mu_j^{1/4}\Big)\rho^{\star\,1/2}_{\mathcal{A}} + \Big(\sum_j p_j\sqrt{\beta_j}\Big)\rho^{\star\,1/2}_{\mathcal{A}},$$
using $\rho^\star_{\mathcal{A}} = (\sum_i p_i\sqrt{\mu_i})^2$. By the Cauchy–Schwarz inequality, $\sum_j p_j\sqrt{\alpha_j}\mu_j^{1/4} \le (\sum_i p_i\alpha_i)^{1/2}\rho^{\star\,1/4}_{\mathcal{A}}$, and by Jensen's inequality, $\sum_j p_j\sqrt{\beta_j} \le (\sum_j p_j\beta_j)^{1/2}$. Combined with Eq. (6) and Eq. (9), the lemma follows. ■

We now show that Eq. (8) holds and provide explicit values for $\alpha$ and $\beta$. Define
$$\nu_i := \Theta_i\cdot\mathbb{E}_{Q_i}\big[(|X^\top\hat{v} - Y| + \Delta)^2\big], \quad\text{and}\quad \hat{\nu}_i := \frac{\Theta_i}{t}\sum_{(x,y)\in T_i}(|x^\top\hat{v} - y| + \Delta)^2.$$
Note that $\tilde{\mu}_i = \hat{\nu}_i + \Theta_i\gamma$. We will relate $\hat{\nu}_i$ to $\nu_i$, and then $\nu_i$ to $\mu_i$, to conclude a bound of the form of Eq. (8) for $\tilde{\mu}_i$.

²Using the fact that $m \ge O(d\log(d)\log(1/\delta_3))$ implies $m \ge O(d\log(m)\log(1/\delta_3))$.
First, note that if m1 ≥O(d log(d) log(1/δ1) and E holds, then for any x ∈∪i∈[K]suppX(Qi), |x ⊤ˆv −x ⊤w⋆| ≤∥x∥∗∥ˆv −w⋆∥≤ s Cd2b2 log(1/δ1) m1 ≡∆. (10) The second inequality stems from ∥x∥∗= 1 for x ∈∪i∈[K]suppX(Qi), and Eq. (7). This is useful in the following lemma, which relates ˆνi with νi. Lemma 5.3. Suppose that m1 ≥O(d log(d) log(1/δ1)) and E holds. Then with probability 1 −δ2 over the draw of T1, . . . , TK, for all i ∈[K], |ˆνi −νi| ≤Θi(b+2∆)2p K log(2K/δ2)/m2 ≡Θiγ. Proof. For a fixed ˆv, ˆνi/Θi is the empirical average of i.i.d. samples of the random variable Z = (|X⊤ˆv −Y | + ∆)2, where (X, Y ) is drawn according to Qi. We now give an upper bound for Z with probability 1. Let ( ˜X, ˜Y ) in the support of D such that X = ˜X/∥˜X∥∗and Y = ˜Y /∥˜X∥∗. Then |X⊤w⋆−Y | = | ˜X⊤w⋆−˜Y |/∥˜X∥∗≤b. If E holds and m1 ≥O(d log(d) log(1/δ1)), Z ≤(|X ⊤ˆv −X ⊤w⋆| + |X ⊤w⋆−Y | + ∆)2 ≤(b + 2∆)2, where the last inequality follows from Eq. (10). By Hoeffding’s inequality, for every i, with probability 1 −δ2, |ˆνi −νi| ≤Θi(b + 2∆)2p log(2/δ2)/t. The statement of the lemma follows from a union bound over i ∈[K] and t = m2/K. The following lemma, proved in Appendix D, provides the desired relationship between νi and µi. Lemma 5.4. If m1 ≥O(d log(d) log(1/δ1)) and E holds, then µi ≤νi ≤µi+4∆√Θiµi+4∆2Θi. We are now ready to prove Theorem 5.1. Proof of Theorem 5.1. From the condition on m and the definition of m1, m3 in Alg. 1 we have m1 ≥O(d log(d/δ1)) and m3 ≥O(d log(d/δ3)). Therefore the inequalities in Lemma 5.4, Lemma 5.3 and Eq. (3) (with n, δ, φ substituted with m3, δ3, ˆφ) hold simultaneously with probability 1 − δ1 −δ2 −δ3. For Eq. (3), note that ∥x∥∗ ˆφ(x) ≥ξ, thus m3 ≥cR2 P ˆ φ log(c′n) log(c′′/δ3) as required. Combining Lemma 5.4 and Lemma 5.3, and noting that ˜µi = ˆνi + Θiγ, we conclude that µi ≤˜µi ≤µi + 4∆ p Θiµi + Θi(4∆2 + 2γ). 
By Lemma 5.2, it follows that ρA(ˆφ) ≤ρ⋆ A + 2 √ ∆( X i∈[K] pi p Θi)1/2ρ⋆ A 3/4 + p 4∆2 + 2γ · ( X i∈[K] piΘi)1/2ρ⋆ A 1/2 + ¯O(log(m3) m3 ) ≤ρ⋆ A + 2∆1/2Λ1/4 D ρ⋆ A 3/4 + p 4∆2 + 2γ · Λ1/2 D ρ⋆ A 1/2 + ¯O(log(m3)/m3). The last inequality follows since P i∈[K] piΘi = ΛD. We use ¯O to absorb parameters that already appear in the other terms of the bound. Combining this with Eq. (3), L( ˆw) −L⋆≤Cρ⋆ A log(1/δ3) m3 + C log(1/δ3) m3 2∆1/2Λ1/4 D ρ⋆ A 3/4 + (2∆+ p 2γ) · Λ1/2 D ρ⋆ A 1/2 + ¯O(log(m3) m2 3 ). 7 We have γ = (b+2∆)2p K log(2K/δ2)/m2, and ∆= q Cd2b2 log(1/δ1) m1 . For m1 ≥Cd log(1/δ1), ∆≤b √ d, thus γ ≤b2(2 √ d + 1)2p K log(2K/δ2)/m2. Substituting for ∆and γ, we have L( ˆw) −L⋆≤Cρ⋆ A log(1/δ3) m3 + C log(1/δ3) m3 16Cd2b2 log(1/δ1) m1 1/4 Λ1/4 D ρ⋆ A 3/4 + C log(1/δ3) m3 4Cd2b2 log(1/δ1) m1 1/2 + √ 2b(2 √ d + 1) K log(2K/δ2) m2 1/4 ! · Λ1/2 D ρ⋆ A 1/2 + ¯O(log(m3) m2 3 ). To get the theorem, set m3 = m −m4/5, m2 = m1 = m4/5/2, δ1 = δ2 = δ/4, and δ3 = δ/2. 6 Improvement over Passive Learning Theorem 5.1 shows that our active learner approaches the oracle rate, which can be strictly faster than the rate implied by Theorem 2.1 for passive learning. To complete the picture, observe that this better rate cannot be achieved by any passive learner. This can be seen by the following 1-dimensional example. Let σ > 0, α > 1 √ 2, p = 1 2α2 , and η ∈R such that |η| ≤σ α. Let Dη over R × R such that with probability p, X = α and Y = αη + ϵ, where ϵ ∼N(0, σ2), and with probability 1 −p, X = β := q 1−pα2 1−p and Y = 0. Then E[X2] = 1 and w⋆= pα2η. Consider a partition of R such that α ∈A1 and β ∈A2. Then p1 = p, µ1 = Eϵ[α2(ϵ+αη−αw⋆)2] = α2(σ2 +α2η2(1−pα2)) ≤ 3 2α2σ2. In addition, p2 = 1 −p and µ2 = β4w2 ⋆= ( 1−pα2 1−p )2p2α4η2 ≤p2α2σ2 4(1−p)2 . The oracle risk is ρ⋆ A = (p1 √µ1 + p2 √µ2)2 ≤(p r 3 2ασ + (1 −p) pασ 2(1 −p))2 = p2α2σ2( r 3 2 + 1 2)2 ≤2pσ2. Therefore, for the active learner, with probability 1 −δ, L( ˆw) −L⋆≤2Cpσ2 log(1/δ) m + o( 1 m). 
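The bound ρ⋆A ≤ 2pσ² claimed for the 1-dimensional construction can be spot-checked numerically. The sketch below transcribes the stated formulas for p, μ1, μ2 and ρ⋆A; the helper name and the particular values of α, σ, η are our own choices.

```python
import math

def oracle_risk_1d(alpha, eta, sigma):
    """p and the oracle risk rho*_A for the two-point 1-D construction of
    Section 6, using the formulas as stated (a numerical sanity check)."""
    p = 1.0 / (2 * alpha ** 2)
    mu1 = alpha ** 2 * (sigma ** 2 + alpha ** 2 * eta ** 2 * (1 - p * alpha ** 2))
    mu2 = ((1 - p * alpha ** 2) / (1 - p)) ** 2 * p ** 2 * alpha ** 4 * eta ** 2
    rho = (p * math.sqrt(mu1) + (1 - p) * math.sqrt(mu2)) ** 2
    return p, rho

alpha, sigma = 2.0, 1.0
eta = sigma / alpha          # the extreme allowed bias |eta| <= sigma/alpha
p, rho = oracle_risk_1d(alpha, eta, sigma)
# rho stays below 2*p*sigma**2, so the active rate ~ rho/m beats the
# passive minimax rate sigma**2/(4m) by a factor of order 1/p.
```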
(11) In contrast, consider any passive learner that receives m labeled examples and outputs a predictor ˆw. Consider the estimator for η defined by ˆη = ˆ w pα2 . ˆη estimates the mean of a Gaussian distribution with variance σ2/α2. The minimax optimal rate for such an estimator is σ2 α2n, where n is the number of examples with X = α.3 With probability at least 1/2, n ≤2mp. Therefore, EDm[(ˆη −η)2] ≥ σ2 4α2mp. It follows that EDm[L( ˆw) −L⋆] = EDm[( ˆw −w)2] = p2α4 · E[(ˆη −η)2] ≥pα2σ2 4m = σ2 4m. Comparing this to Eq. (11), one can see that the ratio between the rate of the best passive learner and the rate of the active learner approaches O(1/p) for large m. 7 Discussion Many questions remain open for active regression. For instance, it is of particular interest whether the convergence rates provided here are the best possible for this model. Second, we consider here only the plain vanilla finite-dimensional regression, however we believe that the approach can be extended to ridge regression in a general Hilbert space. Lastly, the algorithm uses static allocation of samples to stages and to partitions. In Monte-Carlo estimation Carpentier and Munos (2012), dynamic allocation has been used to provide convergence to a pseudo-risk with better constants. It is an open question whether this type of approach can be useful in the case of active regression. References M. F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. Journal of Computer and System Sciences, 75(1):78–89, 2009. 3Since |η| ≤σ α, this rate holds when σ2 n ≪σ2 α2 , that is n ≫α2. (Casella and Strawderman, 1981) 8 R. Burbidge, J. J. Rowland, and R. D. King. Active learning for regression based on query by committee. In Intelligent Data Engineering and Automated Learning-IDEAL 2007, pages 209– 218. Springer, 2007. W. Cai, Y. Zhang, and J. Zhou. Maximizing expected model change for active learning in regression. 
In Data Mining (ICDM), 2013 IEEE 13th International Conference on, pages 51–60. IEEE, 2013.
A. Carpentier and R. Munos. Minimax number of strata for online stratified sampling given noisy samples. In N. H. Bshouty, G. Stoltz, N. Vayatis, and T. Zeugmann, editors, Algorithmic Learning Theory, volume 7568 of Lecture Notes in Computer Science, pages 229–244. Springer Berlin Heidelberg, 2012.
G. Casella and W. E. Strawderman. Estimating a bounded normal mean. The Annals of Statistics, 9(4):870–878, 1981.
D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15:201–221, 1994.
D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129–145, 1996.
S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 353–360. MIT Press, 2008.
S. Efromovich. Sequential design and estimation in heteroscedastic nonparametric regression. Sequential Analysis, 26(1):3–25, 2007.
R. Ganti and A. G. Gray. UPAL: Unbiased pool based active learning. In International Conference on Artificial Intelligence and Statistics, pages 422–431, 2012.
P. Glasserman. Monte Carlo Methods in Financial Engineering, volume 53. Springer, 2004.
L. Györfi, M. Kohler, A. Krzyzak, and H. Walk. A Distribution-Free Theory of Nonparametric Regression. Springer, 2002.
D. Hsu and S. Sabato. Heavy-tailed regression with a generalized median-of-means. In Proceedings of the 31st International Conference on Machine Learning, volume 32, pages 37–45. JMLR Workshop and Conference Proceedings, 2014.
D. Hsu, S. M. Kakade, and T. Zhang. Random design analysis of ridge regression. In Twenty-Fifth Conference on Learning Theory, 2012.
T. Kanamori. Statistical asymptotic theory of active learning. Annals of the Institute of Statistical Mathematics, 54(3):459–475, 2002.
T. Kanamori and H. Shimodaira. Active learning algorithm using the maximum weighted log-likelihood estimator. Journal of Statistical Planning and Inference, 116(1):149–162, 2003.
D. Needell, N. Srebro, and R. Ward. Stochastic gradient descent and the randomized Kaczmarz algorithm. arXiv preprint arXiv:1310.5715, 2013.
M. Sugiyama. Active learning in approximately linear regression based on conditional expectation of generalization error. The Journal of Machine Learning Research, 7:141–166, 2006.
M. Sugiyama and S. Nakajima. Pool-based active learning in approximate linear regression. Machine Learning, 75(3):249–274, 2009.
J. Von Neumann. Various techniques used in connection with random digits. Applied Math Series, 12(36-38):1, 1951.
D. P. Wiens. Minimax robust designs and weights for approximately specified regression models with heteroscedastic errors. Journal of the American Statistical Association, 93(444):1440–1450, 1998.
D. P. Wiens. Robust weights and designs for biased regression models: Least squares and generalized M-estimation. Journal of Statistical Planning and Inference, 83(2):395–412, 2000.
Distance-Based Network Recovery under Feature Correlation David Adametz, Volker Roth Department of Mathematics and Computer Science University of Basel, Switzerland {david.adametz,volker.roth}@unibas.ch Abstract We present an inference method for Gaussian graphical models when only pairwise distances of n objects are observed. Formally, this is a problem of estimating an n × n covariance matrix from the Mahalanobis distances dMH(xi, xj), where object xi lives in a latent feature space. We solve the problem in fully Bayesian fashion by integrating over the Matrix-Normal likelihood and a MatrixGamma prior; the resulting Matrix-T posterior enables network recovery even under strongly correlated features. Hereby, we generalize TiWnet [19], which assumes Euclidean distances with strict feature independence. In spite of the greatly increased flexibility, our model neither loses statistical power nor entails more computational cost. We argue that the extension is highly relevant as it yields significantly better results in both synthetic and real-world experiments, which is successfully demonstrated for a network of biological pathways in cancer patients. 1 Introduction In this paper we introduce the Translation-invariant Matrix-T process (TiMT) for estimating Gaussian graphical models (GGMs) from pairwise distances. The setup is particularly interesting, as many applications only allow distances to be observed in the first place. Hence, our approach is capable of inferring a network of probability distributions, of strings, graphs or chemical structures. We begin by stating the setup of classical GGMs: The basic building block is matrix e X ∈Rn×d which follows the Matrix-Normal distribution [8] e X ∼N(M, Ψ ⊗Id). (1) The goal is to identify Ψ−1, which encodes the desired dependence structure. More specifically, two objects (= rows) are conditionally independent given all others if and only if Ψ−1 has a corresponding zero element. 
This is often depicted as an undirected graph (see Figure 1), where the objects are vertices and (missing) edges represent their conditional (in)dependencies. Figure 1: Precision matrix Ψ−1 and its interpretation as a graph (self-loops are typically omitted). Prabhakaran et al. [19] formulated the Translation-invariant Wishart Network (TiWnet), which treats X̃ as a latent matrix and only requires the squared Euclidean distances Dij = dE(x̃i, x̃j)², where x̃i ∈ R^d is the ith row of X̃. Also, SE = X̃X̃⊤ refers to the n × n inner-product matrix, which is linked via Dij = SE,ii + SE,jj − 2 SE,ij. Importantly, the transition to distances implies that means of the form M = 1n w⊤ with w ∈ R^d are no longer identifiable. In contrast to the above, we start off by assuming a matrix X := X̃Σ^{1/2} ∼ N(M, Ψ ⊗ Σ), (2) where the columns (= features) are correlated as defined by Σ ∈ R^{d×d}. Due to this change, the inner product becomes SMH = XX⊤ = X̃ΣX̃⊤. If we directly observed X as in classical GGMs, then Σ could be removed to recover X̃; in the case of distances, however, the impact of Ψ and Σ is inevitably mixed. A suitable assumption is therefore the squared Mahalanobis distance Dij = dMH(xi, xj)² = (x̃i − x̃j)⊤Σ(x̃i − x̃j), (3) which dramatically increases the degrees of freedom for inference about Ψ. Recall that in our setting only D is observed and the following are latent: d, X, X̃, S := SMH, Σ and M = 1n w⊤. The main difficulty comes from the inherent mixing of Ψ and Σ in the distances, which blurs or obscures what is relevant in GGMs. For example, if we naively enforce Σ = Id, then all of the information is solely attributed to Ψ. However, in applications where the true Σ ≠ Id, we would consequently infer false structure, up to the degree where the result is completely misled by feature correlation. In pure Bayesian fashion, we specify a prior belief for Σ and average over all realizations weighted by the Gaussian likelihood.
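The link Dij = SE,ii + SE,jj − 2 SE,ij between the inner-product matrix and the squared Euclidean distances can be verified in a few lines; the sketch below uses arbitrary random data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))            # n=6 objects, d=4 features
S = X @ X.T                                # inner-product (Gram) matrix S_E
diag = np.diag(S)
D = diag[:, None] + diag[None, :] - 2 * S  # D_ij = S_ii + S_jj - 2 S_ij

# The same matrix, computed directly as squared Euclidean distances.
D_direct = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
```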
For a conjugate prior, this leads to the Matrix-T distribution, which forms the core part of our approach. The resulting model generalizes TiWnet and is flexible enough to account for arbitrary feature correlation. In the following, we briefly describe a practical application with all the above properties. Example: A Network of Biological Pathways Using DNA microarrays, it is possible to measure the expression levels of thousands of genes in a patient simultaneously; however, each gene is highly prone to noise and only weakly informative when analyzed on its own. To solve this problem, the focus is shifted towards pathways [5], which can be seen as (non-disjoint) groups of genes that contribute to high-level biological processes. The underlying idea is that genes exhibit visible patterns only when paired with functionally related entities. Hence, every pathway has a characteristic distribution of gene expression values, which we compare via the so-called Bhattacharyya distance [2, 11]. Our goal is then to derive a network between pathways, but what if the patients (= features) from whom we obtained the cells were correlated (sex, age, treatment, . . .)? Figure 2: The big picture. Different assumptions about M and Σ lead to different models. Related work Inference in GGMs is generally aimed at Ψ−1 and therefore every approach relies on Eq. (1) or (2); however, they differ in their assumptions about M and Σ. Figure 2 puts our setting into a larger context and describes all possible configurations in a single scheme. Throughout the paper, we assume there are n objects and an unknown number of d latent features. Since our inputs are pairwise distances D, the mean is of the form M = 1n w⊤, but at the same time, we do not impose any restriction on Σ. A complementary assumption is made in TiWnet [19], which enforces strict feature independence.
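The connection between Eq. (2) and Eq. (3) — squared Mahalanobis distances of the latent rows under Σ coincide with squared Euclidean distances of X = X̃ Σ^{1/2} — can be checked numerically. A sketch, using the symmetric matrix square root:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 3
Xt = rng.standard_normal((n, d))             # latent matrix (X with tilde)
A = rng.standard_normal((d, d))
Sigma = A @ A.T + np.eye(d)                  # a positive-definite feature covariance

# Squared Mahalanobis distances of the latent rows under Sigma ...
diffs = Xt[:, None, :] - Xt[None, :, :]
D_mh = np.einsum('ijk,kl,ijl->ij', diffs, Sigma, diffs)

# ... equal the squared Euclidean distances of X = Xt @ Sigma^{1/2}.
w, V = np.linalg.eigh(Sigma)
X = Xt @ (V * np.sqrt(w)) @ V.T              # symmetric square root of Sigma
D_eu = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
```

This is exactly the "mixing" the paper describes: once only D_mh is observed, the contributions of Ψ and Σ can no longer be separated directly.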
For the models based on matrix X, the mean matrix is defined as M = v1⊤ d with v ∈Rn. This choice is neither better nor worse—it does not rely on pairwise distances and hence addresses a different question. By further assuming Σ = Id, we arrive at the graphical LASSO (gL) [7] that optimizes the likelihood under an L1 penalty. The Transposable Regularized Covariance Model (TRCM) [1] is closely related, but additionally allows arbitrary Σ and alternates between estimating Ψ−1 and Σ−1. The basic configuration for S, M = 0n×d and Σ = Id, also leads to the model of gL, however this rarely occurs in practice. 2 Model On the most fundamental level, our task deals with incorporating invariances into the Gaussian model, meaning it must not depend on any unrecoverable feature information, i.e. Σ, M = 1nw⊤ (vanishes for distances) and d. The starting point is the log-likelihood of Eq. (2) ℓ(W, Σ, M ; X) = d 2 log |W| −n 2 log |Σ| −1 2tr W(X −M)Σ−1(X −M)⊤ , (4) where we used the shorthand W := Ψ−1. In the literature, there exist two conceptually different approaches to achieve invariances: the first is the classical marginal likelihood [12], closely related to the profile likelihood [16], where a nuisance parameter is either removed by a suitable statistic or replaced by its corresponding maximum likelihood estimate [9]. The second approach follows the Bayesian marginal likelihood by introducing a prior and integrating over the product. Hereby, the posterior is a weighted average, where the weights are distributed according to prior belief. The following sections will discuss the required transformations of Eq. (4). 2.1 Marginalizing the Latent Feature Correlation 2.1.1 Classical Marginal Likelihood Let us begin with the attempt to remove Σ by explicit reconstruction, as done in McCullagh [13]. Computing the derivative of Eq. 
(4) with respect to Σ and setting it to zero, we arrive at the maximum likelihood estimate bΣ = 1 n(X −M)⊤W(X −M), which leads to ℓ(W, M ; X, bΣ) = d 2 log |W| −n 2 log |bΣ| −1 2tr(W(X −M)bΣ−1(X −M)⊤) (5) = d 2 log |W| −n 2 log |W(X −M)(X −M)⊤|. (6) Eq. (6) does not depend on Σ anymore, however, note that there is a hidden implication in Eq. (5): bΣ−1 only exists if bΣ has full rank, or equivalently, if d ≤n. Further, even d = n must be excluded, since Eq. (6) would become independent of X otherwise. McCullagh [13] analyzed the Fisher information for varying d and concluded that this model is “a complete success” for d ≪n, but “a spectacular failure” if d →n. Since distance matrices typically require d ≥n, the approach does not qualify. 2.1.2 Bayesian Marginal Likelihood Iranmanesh et al. [10] analyzed the Matrix-Normal likelihood in Eq. (4) in conjunction with an Inverse Matrix-Gamma (IMG) prior—the latter being a generalization of an inverse Wishart prior. It is denoted by Σ ∼IMG(α, β, Ω), where α > 1 2(d −1) and β > 0 are shape and scale parameters, respectively. Ωis a d × d positive-definite matrix reflecting the expectation of Σ. This combination leads to the so-called (Generalized) Matrix T-distribution1 X ∼T (α, β, M, W, Ω) with likelihood ℓ(W, M ; α, β, X, Ω) = d 2 log |W| −(α + n 2 ) log |In + β 2 W(X −M)Ω−1(X −M)⊤|. (7) Compared to the classical marginal likelihood, the obvious differences are In and scalar β, which can be seen as regularization. The limit of β →∞implies that no regularization takes place 1Choosing an inverse Wishart prior for Σ results in the standard Matrix T-distribution, however its variance can only be controlled by an integer. This is why the Generalized Matrix T-distribution is preferred. 3 and, interestingly, this likelihood resembles Eq. (6). The other extreme β →0 leads to a likelihood that is independent of X. 
Another observation is that the regularization ensures full rank of In + β 2 W(X −M)Ω−1(X −M)⊤, hence any d ≥1 is valid. At this point, the Bayesian approach reveals a fundamental advantage: For TiWnet, the distance matrix enforced independent features, but now, we are in a position to maintain the full model while adjusting the hyperparameters instead. We propose Ω≡Id, meaning the prior of Σ will be centered at independent latent features, which is a common and plausible choice before observing any data. The flexibility ultimately comes from α and β when defining a flat prior, which means deviations from independent features are explicitly allowed. 2.2 Marginalizing the Latent Means The fact that we observe a distance matrix D implies that information about the (feature) coordinate system is irrevocably lost, namely M = 1w⊤, which is why the means must be marginalized. We briefly discuss the necessary steps, but for an in-depth review please refer to [19, 14, 17]. Following the classical marginalization, it suffices to define a projection L ∈R(n−1)×n with property L1n = 0n−1. In other words, all biases of the form 1nw⊤are mapped to the nullspace of L. The Matrix T-distribution under affine transformations [10, Theorem 3.2] reads LX ∼T (α, β, LM, LΨL⊤, Ω) and in our case (Ω= Id, LM = L1nw⊤= 0(n−1)×d), we have ℓ(Ψ ; α, β, LX) = −d 2 log |LΨL⊤| −(α + n−1 2 ) log |In + β 2 L⊤(LΨL⊤)−1LXX⊤|. (8) Note that due to the statistic LX, the likelihood is constant over all X (or S) mapping to the same D. As we are not interested in any specifics about L other than its nullspace, we replace the image with the kernel of the projection and define matrix Q := In −(1⊤ n W1n)−11n1⊤ n W. Using the identity QSQ⊤= −1 2QDQ⊤and Q⊤WQ = WQ, we can finally write the likelihood as ℓ(W ; α, β, D, 1n) = d 2 log |W| −d 2 log(1⊤ n W1n) −(α + n−1 2 ) log |In −β 4 WQD|, (9) which accounts for arbitrary latent feature correlation Σ and all mean matrices M = 1nw⊤. 
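The identity QSQ⊤ = −(1/2) QDQ⊤, which lets the likelihood be written in terms of the observed distance matrix, can be verified directly. A sketch: any positive-definite W works, because Q1n = 0 annihilates the diagonal terms of D.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
X = rng.standard_normal((n, 4))
S = X @ X.T
D = np.diag(S)[:, None] + np.diag(S)[None, :] - 2 * S

B = rng.standard_normal((n, n))
W = B @ B.T + n * np.eye(n)                  # an arbitrary positive-definite W
one = np.ones((n, 1))
Q = np.eye(n) - (one @ one.T @ W) / (one.T @ W @ one)   # Q 1_n = 0

lhs = Q @ S @ Q.T
rhs = -0.5 * Q @ D @ Q.T
```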
In hindsight, the combination of Bayesian and classical marginal likelihood might appear arbitrary, but both strategies have their individual strengths. Mean matrix M, for example, is limited to a single direction in an n dimensional space, therefore the statistic LX represents a convenient solution. In contrast, the rank-d matrix Σ affects a much larger spectrum that cannot be handled in the same fashion—ignoring this leads to a degenerate likelihood as previously shown. The problem is only tractable when specifying a prior belief for Bayesian marginalization. On a side note, the Bayesian posterior includes the classical marginal likelihood for the choice of an improper prior [4], which could be seen in the Matrix-T likelihood, Eq. (7), in the limit of β →∞. 3 Inference The previous section developed a likelihood for GGMs that conforms to all aspects of information loss inherent to distance matrices. As our interest lies in the network-defining W, the following will discuss Bayesian inference using a Markov chain Monte Carlo (MCMC) sampler. Hyperparameters α, β and d At some point in every Bayesian analysis, all hyperparameters need to be specified in a sensible manner. Currently, the occurrence of d in Eq. (9) is particularly problematic, since (i) the number of latent features is unknown and (ii) it critically affects the balance between determinants. To resolve this issue, recall that α must satisfy α > 1 2(d −1), which can alternatively be expressed as α = 1 2(vd −n + 1) with v > 1 + n−2 d . Thereby, we arrive at ℓ(W ; v, β, D, 1n) = d 2 log |W| −d 2 log(1⊤ n W1n) −vd 2 log |In −β 4 WQD|, (10) where d now influences the likelihood on a global level and can be used as temperature reminiscent of simulated annealing techniques for optimization. In more detail, we initialize the MCMC sampler with a small value of d and increase it slowly, until the acceptance ratio is below, say, 1 percent. 
After that event, all samples of W are averaged to obtain the final network. Parameters v and β still play a crucial role in the process of inference, as they distribute the probability mass across all latent feature correlations and effectively control the scope of plausible Σ. Upon closer inspection, we gain more insight from the variance of the Matrix-T distribution, 2(Ψ ⊗ Ω) / (β(vd − 2n + 1)), (11) which is maximal when β and v are jointly small. We aim for the most flexible solution, thus v is fixed at the smallest possible value and β is stochastically integrated out in a Metropolis-Hastings step. A suitable choice is a Gamma prior β ∼ Γ(βshape, βscale); its shape and scale must be chosen to be sufficiently flexible on the scale of the distance matrix at hand.

Algorithm 1: One loop of the MCMC sampler
Input: distance matrix D, temperature d and fixed v > 1 + (n − 2)/d
for i = 1 to n do
  W(p) ← W  ((p) refers to the proposal)
  Uniformly select node k ≠ i and sample element W(p)_ik from {−1, 0, +1}
  Set W(p)_ki ← W(p)_ik and update W(p)_ii and W(p)_kk accordingly
  Compute posterior in Eq. (12) and acceptance of W(p)
  if u ∼ U(0, 1) < acceptance then W ← W(p) end if
end for
Sample proposal β(p) ∼ Γ(βshape, βscale)
Compute posterior in Eq. (12) and acceptance of β(p)
if u ∼ U(0, 1) < acceptance then β ← β(p) end if

Priors for W The prior for W is first and foremost required to be sparse and flexible. There are many valid choices, like spike and slab [15] or partial correlation [3], but we adapt the two-component scheme of TiWnet, which has computational advantages and enables symmetric random walks. The following briefly explains the construction: Prior p1(W) defines a symmetric random matrix, where off-diagonal elements Wij are uniform on {−1, 0, +1}, i.e. an edge with positive/negative weight or no edge. The diagonal is chosen such that W is positive definite: Wii ← ϵ + Σ_{j≠i} |Wij|. Although this only allows 3 levels, it proved to be sufficiently flexible in practice.
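One sweep of Algorithm 1 can be sketched as follows. This is a deliberate simplification of our own: determinants are recomputed from scratch (O(n³) per evaluation) rather than via the paper's QR rank-1 updates, the β move is omitted, and `EPS`, the function names, and the toy parameters are ours.

```python
import numpy as np

EPS = 1e-3

def log_posterior(W, D, d, v, beta, lam):
    """Unnormalized log-posterior: Eq. (10) plus the Laplacian sparsity
    prior with weight lam (a simplified sketch, not the exact code)."""
    n = len(W)
    one = np.ones(n)
    sign_w, logdet_w = np.linalg.slogdet(W)
    if sign_w <= 0:
        return -np.inf
    Q = np.eye(n) - np.outer(one, one @ W) / (one @ W @ one)
    sign_t, logdet_t = np.linalg.slogdet(np.eye(n) - 0.25 * beta * W @ Q @ D)
    if sign_t <= 0:
        return -np.inf
    ll = 0.5 * d * logdet_w - 0.5 * d * np.log(one @ W @ one) - 0.5 * v * d * logdet_t
    return ll - lam * np.sum(np.diag(W) - EPS)

def sweep(W, D, d, v, beta, lam, rng):
    """One pass of symmetric edge-flip Metropolis moves over all nodes."""
    n = len(W)
    for i in range(n):
        k = int(rng.choice([j for j in range(n) if j != i]))
        Wp = W.copy()
        Wp[i, k] = Wp[k, i] = rng.choice([-1, 0, 1])
        for j in (i, k):   # restore W_jj = EPS + sum of |off-diagonals|
            Wp[j, j] = EPS + np.sum(np.abs(Wp[j])) - abs(Wp[j, j])
        delta = log_posterior(Wp, D, d, v, beta, lam) - log_posterior(W, D, d, v, beta, lam)
        if np.log(rng.uniform()) < delta:
            W = Wp
    return W

# Toy run on distances from random data; v = 2 satisfies v > 1 + (n-2)/d.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
S = X @ X.T
D = np.diag(S)[:, None] + np.diag(S)[None, :] - 2 * S
W = sweep(EPS * np.eye(5), D, d=10, v=2.0, beta=0.1, lam=0.5, rng=rng)
```

The diagonal rule keeps every visited W diagonally dominant, so positive definiteness is preserved throughout the chain.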
Replacing it with more levels is possible, but conceptually identical. The second component is a Laplacian p2(W | λ) ∝exp −λ Pn i=1(Wii −ϵ) and induces sparsity. Here, the total number of edges in the network is penalized by parameter λ > 0. Combining the likelihood of Eq. (10) and the above priors, the final posterior reads: p(W | • ) = p(D | W, β, 1n) p1(W) p2(W | λ) p3(β | βshape, βscale). (12) The full scheme of the MCMC sampler is reported in Algorithm 1. Complexity Analysis The runtime of Algorithm 1 is primarily determined by the repeated evaluation of the posterior in Eq. (12), which would require O(n4) in the naive case of fully recomputing the determinants. Every flip of an edge, however, only changes a maximum of 4 elements2 in W, which gives rise to an elegant update scheme building on the QR decomposition. Theorem. One full loop in Algorithm 1 requires O(n3). Proof. Due to the 3-level prior, there are only 6 possible flip configurations depending on the current edge between object i and j (2 examples depicted here for i = 1, j = 3): ∆W := W (p) −W ⇔ ("−1 0 +1 0 0 0 +1 0 −1 # , . . . , " 0 0 +2 0 0 0 +2 0 0 #) (13) An important observation is that ∆W can solely be expressed in terms of rank-1 matrices, in particular either uv⊤or uv⊤+ ab⊤. If we know the QR decomposition of W, then the decomposition 2This also holds for more than 3 edge levels. 5 of W (p) can be found in O(n2). Consequently, its determinant is obtained by det(QR) = Qn i=1 Rii in O(n). Our goal is to exploit this property and express both determinants of the posterior as rank-1 updates to their existing QR decompositions. Restating the likelihood, we have ℓ(W (p) ; •) = d 2 log |W (p)| | {z } =: det1 −d 2 log(1⊤ n W (p)1n) −vd 2 log |In −β 4 W (p)QD| | {z } =: det2 . (14) Updating det1 corresponds to either W (p) = W + uv⊤or W (p) = W + uv⊤+ ab⊤as explained in Eq. (13), thus leading to O(n2). 
We reformulate det2 to follow the same scheme: det2 = In −β 4 W In − 1 1⊤ n W 1n 1n1⊤ n W D −β 4 h 1 1⊤ n W 1n −γ W1n −γ v⊤1n u + b⊤1n a i DW1n ⊤ −β 4 h u −γ 1⊤ n u W1n + v⊤1n u + b⊤1n a i Dv ⊤ −β 4 h a −γ 1⊤ n a W1n + v⊤1n u + b⊤1n a i Db ⊤. (15) For notational convenience, we defined the shorthand γ := 1 1⊤ n W (p)1n = 1 1⊤ n (W + uv⊤+ ab⊤)1n = 1 1⊤ n W1n + (1⊤ n u)(v⊤1n) + (1⊤ n a)(b⊤1n) . Note that the determinant of the first line in Eq. (15) is already known (i.e. its QR decomposition) and the following 3 lines are only rank-1 updates as indicated by parenthesis. Therefore, det2 is computed in 3 steps, each consuming O(n2). For some of the 6 flip configurations, we even have a = b = 0n, which renders the last line in Eq. (15) obsolete and simplifies the remaining terms. Since the for loop covers n flips, all updates contribute as n·O(n2). There is no shortcut to evaluate proposal β(p) given β, thus its posterior is recomputed from scratch in O(n3). Therefore, Algorithm 1 has an overall complexity of O(n3), which is the same as TiWnet. 4 Experiments 4.1 Synthetic Data We first look at synthetic data and compare how well the recovered network matches the true one. Hereby, the accuracy is measured by the f-score using the edges (positive/negative/zero). Independent Latent Features Since TiMT is a generalization for arbitrary Σ, it must also cover Σ ≡Id, thus, we generate a set of 100 Gaussian-distributed matrices X with known W and Σ = Id, where n = 30 and d = 300. Next, we add column translations 1nw⊤with elements in w ∈Rd being Gamma distributed, however these do not enter D by definition. As TRCM does not account for column shifts, it is used in conjunction with the true, unshifted matrix X (hence TRCM.u). All methods require a regularization parameter, which obviously determines the outcome. In particular, TiWnet and TiMT use the same, constant parameter throughout all 100 distance matrices and obtain the final W via annealing. 
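The complexity argument rests on rank-1 determinant updates. The paper carries these out through QR factor updates; as an alternative illustration of why rank-1 changes are cheap (not the method used in the paper), the matrix determinant lemma gives det(W + uv⊤) = det(W)(1 + v⊤W⁻¹u), turning a fresh factorization into a single linear solve.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
B = rng.standard_normal((n, n))
W = B @ B.T + n * np.eye(n)          # positive definite, so det(W) > 0
u = rng.standard_normal(n)
v = rng.standard_normal(n)

# Matrix determinant lemma: det(W + u v^T) = det(W) * (1 + v^T W^{-1} u).
lhs = np.linalg.det(W + np.outer(u, v))
rhs = np.linalg.det(W) * (1 + v @ np.linalg.solve(W, u))
```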
Concerning TRCM and gL, we evaluate each X on a set of parameters and only report the highest f-score per data set. This is in strong favor of the competition. Boxplots of the achieved f-scores and the false positive rates are depicted in Figure 3, left. As can be seen, TiMT and TiWnet score as high as TRCM.u without knowledge of features or feature translations. We omit gL from the comparison due to a model mismatch regarding M, meaning it will naturally fall short. Instead, the interested reader is pointed to extensive results in [19]. The gist of this experiment is that all methods work well when the model requirements are met. Also, translating the individual features and obscuring them does not impair TiWnet and TiMT. Correlated Latent Features The second experiment is similar to the first one (n = 30, d = 300 and column shifts), but it additionally introduces feature correlation. Here, Σ is generated by sampling a matrix G ∼ N(0d×5d, Id ⊗ I5d) and adding a Gamma-distributed vector a ∈ R^{5d} to randomly selected rows of G. The final feature covariance matrix is given by Σ = (1/5d) GG⊤. Figure 3: Results for synthetic data (f-scores and false positive rates for independent and correlated latent features). Translations do not apply to TRCM.u. Models with violated assumptions (M and/or Σ) are highlighted with gray bars. Due to the dramatically increased degree of freedom, all methods are impacted by lower f-scores (see Figure 3, right). As expected, TRCM.u performs best in terms of f-score, which is based on the unshifted full data matrix X with an individually optimized regularization parameter. TiMT, however, follows by a slim margin.
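The f-score over edge labels (positive/negative/zero) can be read as a macro-averaged F-score on the upper triangle of the signed adjacency structure. The paper does not spell out its exact definition, so the following is a plausible sketch rather than the authors' code.

```python
import numpy as np

def edge_f_score(W_true, W_est):
    """Macro-averaged F-score over the three edge labels {-1, 0, +1},
    computed on the strict upper triangle (a plausible reading of the
    paper's metric, not its exact implementation)."""
    iu = np.triu_indices_from(W_true, k=1)
    t, e = np.sign(W_true[iu]), np.sign(W_est[iu])
    scores = []
    for label in (-1, 0, 1):
        tp = np.sum((t == label) & (e == label))
        fp = np.sum((t != label) & (e == label))
        fn = np.sum((t == label) & (e != label))
        if tp + fp + fn == 0:
            continue                      # label absent in truth and estimate
        scores.append(2 * tp / (2 * tp + fp + fn))
    return float(np.mean(scores))

W = np.array([[2, 1, 0], [1, 2, -1], [0, -1, 2]])
perfect = edge_f_score(W, W)              # identical networks
```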
On the contrary, TiWnet explains the similarities exclusively by adding more (unnecessary) edges, which is reflected in its increased, but strongly consistent false positive rate. This issue leads to a comparatively low f-score that is even below the remaining contenders. Finally, Figure 4 shows an example network and its reconstruction. Keeping in mind the drastic information loss between the true X30×300 and D30×30, TiMT performs extremely well. Figure 4: An example for synthetic data with feature correlation (true network, TiMT, TiWnet). The network inferred by TiMT (center) is relatively close to ground truth (left); however, TiWnet (right) is apparently misled by Σ. Black/red edges refer to +/− edge weight. 4.2 Real-World Data: A Network of Biological Pathways In order to demonstrate the scalability of TiMT, we apply it to the publicly available colon cancer dataset of Sheffer et al. [20], which comprises 13 437 genes measured across 182 patients. Using the latest gene sets from the KEGG database, we arrive at n = 276 distinct pathways. After learning the mean and variance of each pathway as the distribution of its gene expression values across patients, the Bhattacharyya distances [11] are computed as a 276 × 276 matrix D. The pathways are allowed to overlap via common genes, thus leading to similarities; however, it is unclear how and to what degree the correlation of patients affects the inferred network. For this purpose, we run TiMT alongside TiWnet with identical parameters for 20 000 samples and report the annealed networks in Figure 5. Again, the difference in topology is only due to latent feature correlation. Runtime on a standard 3 GHz PC was 3:10 hours for TiMT, while a naive implementation in O(n4) finished after ∼20 hours.
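Since each pathway is summarized by a mean and a variance of its gene expression values, the pairwise comparison reduces to the Bhattacharyya distance between univariate Gaussians, which has a standard closed form. A sketch (the paper cites [11] for its distance; the formula below is the textbook univariate version):

```python
import math

def bhattacharyya_gauss(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians:
    (mu1-mu2)^2 / (4(var1+var2)) + 0.5*ln((var1+var2)/(2*sqrt(var1*var2)))."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * math.log((var1 + var2) / (2 * math.sqrt(var1 * var2))))

# Identical distributions are at distance zero; separation in mean or
# variance increases the distance.
same = bhattacharyya_gauss(0.0, 1.0, 0.0, 1.0)
shifted = bhattacharyya_gauss(0.0, 1.0, 2.0, 1.0)
```

Filling a 276 × 276 matrix D with such values is exactly the preprocessing step the experiment describes.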
TiWnet performed slightly better at around 3 hours, since its model does not have the hyperparameter β that controls feature correlation.

³ http://www.genome.jp/kegg/, accessed in May 2014

[Figure 5: A network of pathways in colon cancer patients, inferred by TiMT (left) and TiWnet (right), where each vertex represents one pathway. From both results, we extract a subgraph of 3 pathways (vertices 96, 98 and 114) including all neighbors within reach of 2 edges. The matrix on the bottom shows external information on pathway similarity based on their relative number of protein-protein interactions. Black/red edges refer to +/− edge weight.]
Without side information it is not possible to confirm either result, hence we resort to expert knowledge on protein-protein interactions from the BioGRID⁴ database and compute the strength of connection between two pathways as the number of interactions relative to their theoretical maximum. Using this, we can easily check subnetworks for plausibility (see Figure 5, center): the black vertices 96, 98 and 114 correspond to base excision repair, mismatch repair and cell cycle, which are particularly interesting as they play a key role in DNA mutation. These pathways are known to be strongly dysregulated in colon cancer and indicate an elevated susceptibility [18, 6]. The topology of these 3 pathways for TiMT is fully supported by the protein interactions, i.e. 98 is the link between 114 and 96, and removing it renders 96 and 114 independent. TiWnet, on the contrary, overestimates the network and produces a highly-connected structure contradicting the evidence. This is a clear indicator of latent feature correlation.

5 Conclusion

We presented the Translation-invariant Matrix-T process (TiMT) as an elegant way to make inference in Gaussian graphical models when only pairwise distances are available. Previously, the inherent information loss about the underlying features appeared to prevent any conclusive statement about their correlation; however, we argue that neither assumed full independence nor maximum likelihood estimation is reasonable in this context. Our contribution is threefold: (i) A Bayesian relaxation solves the issue of strict feature independence in GGMs. The assumption is now shifted into the prior, but flat priors are possible. (ii) The approach generalizes TiWnet but maintains the same complexity; thus, there is no reason to retain the simplified model. (iii) TiMT for the first time accounts for all latent parameters of the Matrix Normal without access to the latent data matrix X. The distances D are fully sufficient.
In synthetic experiments, we observed a substantial improvement over TiWnet, which highly overestimated the networks and falsely attributed all information to the topological structure. At the same time, TiMT performed almost on par with TRCM(.u), which operates under hypothetical, optimal conditions. This demonstrates that all aspects of information loss can be handled exceptionally well. Finally, the network of biological pathways provided promising results for a domain of non-vectorial objects, which effectively precludes all methods except TiMT and TiWnet. Comparing these two, the considerable difference in network topology only goes to show that invariance against latent feature correlation is indispensable, especially pertaining to distances.

⁴ http://thebiogrid.org, version 3.2

References

[1] G. Allen and R. Tibshirani. Transposable Regularized Covariance Models with an Application to Missing Data Imputation. The Annals of Applied Statistics, 4:764–790, 2010.
[2] A. Bhattacharyya. On a Measure of Divergence between Two Statistical Populations Defined by Their Probability Distributions. Bulletin of the Calcutta Mathematical Society, 35:99–109, 1943.
[3] M. Daniels and M. Pourahmadi. Modeling Covariance Matrices via Partial Autocorrelations. Journal of Multivariate Analysis, 100(10):2352–2363, 2009.
[4] A. de Vos and M. Francke. Bayesian Unit Root Tests and Marginal Likelihood. Technical report, Department of Econometrics and Operations Research, VU University Amsterdam, 2008.
[5] L. Ein-Dor, O. Zuk, and E. Domany. Thousands of Samples are Needed to Generate a Robust Gene List for Predicting Outcome in Cancer. In Proceedings of the National Academy of Sciences, pages 5923–5928, 2006.
[6] P. Fortini, B. Pascucci, E. Parlanti, M. D'Errico, V. Simonelli, and E. Dogliotti. The Base Excision Repair: Mechanisms and its Relevance for Cancer Susceptibility. Biochimie, 85(11):1053–1071, 2003.
[7] J. Friedman, T. Hastie, and R. Tibshirani. Sparse Inverse Covariance Estimation with the Graphical Lasso. Biostatistics, 9(3):432–441, 2008.
[8] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. PMS Series. Addison-Wesley Longman, 1999.
[9] D. Harville. Maximum Likelihood Approaches to Variance Component Estimation and to Related Problems. Journal of the American Statistical Association, 72(358):320–338, 1977.
[10] A. Iranmanesh, M. Arashi, and S. Tabatabaey. On Conditional Applications of Matrix Variate Normal Distribution. Iranian Journal of Mathematical Sciences and Informatics, pages 33–43, 2010.
[11] T. Jebara and R. Kondor. Bhattacharyya and Expected Likelihood Kernels. In Conference on Learning Theory, 2003.
[12] J. Kalbfleisch and D. Sprott. Application of Likelihood Methods to Models Involving Large Numbers of Parameters. Journal of the Royal Statistical Society, Series B (Methodological), 32(2):175–208, 1970.
[13] P. McCullagh. Marginal Likelihood for Parallel Series. Bernoulli, 14:593–603, 2008.
[14] P. McCullagh. Marginal Likelihood for Distance Matrices. Statistica Sinica, 19:631–649, 2009.
[15] T. Mitchell and J. Beauchamp. Bayesian Variable Selection in Linear Regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
[16] S. Murphy and A. van der Vaart. On Profile Likelihood. Journal of the American Statistical Association, 95:449–465, 2000.
[17] H. Patterson and R. Thompson. Recovery of Inter-Block Information when Block Sizes are Unequal. Biometrika, 58(3):545–554, 1971.
[18] P. Peltomäki. DNA Mismatch Repair and Cancer. Mutation Research, 488(1):77–85, 2001.
[19] S. Prabhakaran, D. Adametz, K. J. Metzner, A. Böhm, and V. Roth. Recovering Networks from Distance Data. Journal of Machine Learning Research, 92:251–283, 2013.
[20] M. Sheffer, M. D. Bacolod, O. Zuk, S. F. Giardina, H. Pincas, F. Barany, P. B. Paty, W. L. Gerald, D. A. Notterman, and E. Domany. Association of Survival and Disease Progression with Chromosomal Instability: A Genomic Exploration of Colorectal Cancer. In Proceedings of the National Academy of Sciences, pages 7131–7136, 2009.
Rounding-based Moves for Metric Labeling

M. Pawan Kumar
Ecole Centrale Paris & INRIA Saclay
pawan.kumar@ecp.fr

Abstract

Metric labeling is a special case of energy minimization for pairwise Markov random fields. The energy function consists of arbitrary unary potentials, and pairwise potentials that are proportional to a given metric distance function over the label set. Popular methods for solving metric labeling include (i) move-making algorithms, which iteratively solve a minimum st-cut problem; and (ii) the linear programming (LP) relaxation based approach. In order to convert the fractional solution of the LP relaxation to an integer solution, several randomized rounding procedures have been developed in the literature. We consider a large class of parallel rounding procedures, and design move-making algorithms that closely mimic them. We prove that the multiplicative bound of a move-making algorithm exactly matches the approximation factor of the corresponding rounding procedure for any arbitrary distance function. Our analysis includes all known results for move-making algorithms as special cases.

1 Introduction

A Markov random field (MRF) is a graph whose vertices are random variables, and whose edges specify a neighborhood over the random variables. Each random variable can be assigned a value from a set of labels, resulting in a labeling of the MRF. The putative labelings of an MRF are quantitatively distinguished from each other by an energy function, which is the sum of potential functions that depend on the cliques of the graph. An important optimization problem associated with the MRF framework is energy minimization, that is, finding a labeling with the minimum energy. Metric labeling is a special case of energy minimization, which models several useful low-level vision tasks [3, 4, 18]. It is characterized by a finite, discrete label set and a metric distance function over the labels.
The energy function in metric labeling consists of arbitrary unary potentials and pairwise potentials that are proportional to the distance between the labels assigned to the corresponding random variables. The problem is known to be NP-hard [20]. Two popular approaches for metric labeling are: (i) move-making algorithms [4, 8, 14, 15, 21], which iteratively improve the labeling by solving a minimum st-cut problem; and (ii) the linear programming (LP) relaxation [5, 13, 17, 22], which is obtained by dropping the integrality constraints in the corresponding integer programming formulation. Move-making algorithms are very efficient due to the availability of fast minimum st-cut solvers [2] and are very popular in the computer vision community. In contrast, the LP relaxation is significantly slower, despite the development of specialized solvers [7, 9, 11, 12, 16, 19, 22, 23, 24, 25]. However, when used in conjunction with randomized rounding algorithms, the LP relaxation provides the best known polynomial-time theoretical guarantees for metric labeling [1, 5, 10]. At first sight, the difference between move-making algorithms and the LP relaxation appears to be the standard accuracy vs. speed trade-off. However, for some special cases of distance functions, it has been shown that appropriately designed move-making algorithms can match the theoretical guarantees of the LP relaxation [14, 15, 20]. In this paper, we extend this result to a large class of randomized rounding procedures, which we call parallel rounding. In particular, we prove that for any arbitrary (semi-)metric distance function, there exist move-making algorithms that match the theoretical guarantees provided by parallel rounding. The proofs, the various corollaries of our theorems (which cover all previously known guarantees) and our experimental results are deferred to the accompanying technical report.

2 Preliminaries

Metric Labeling. The problem of metric labeling is defined over an undirected graph G = (X, E).
The vertices X = {X_1, X_2, ..., X_n} are random variables, and the edges E specify a neighborhood relationship over the random variables. Each random variable can be assigned a value from the label set L = {l_1, l_2, ..., l_h}. We assume that we are also provided with a metric distance function d : L × L → R_+ over the labels. We refer to an assignment of values to all the random variables as a labeling. In other words, a labeling is a vector x ∈ L^n, which specifies the label x_a assigned to each random variable X_a. The h^n different labelings are quantitatively distinguished from each other by an energy function Q(x), which is defined as follows:

$$Q(\mathbf{x}) = \sum_{X_a \in \mathbf{X}} \theta_a(x_a) + \sum_{(X_a, X_b) \in E} w_{ab}\, d(x_a, x_b).$$

Here, the unary potentials θ_a(·) are arbitrary, and the edge weights w_ab are non-negative. Metric labeling requires us to find a labeling with the minimum energy. It is known to be NP-hard.

Multiplicative Bound. As metric labeling plays a central role in low-level vision, several approximate algorithms have been proposed in the literature. A common theoretical measure of accuracy for an approximate algorithm is the multiplicative bound. In this work, we are interested in the multiplicative bound of an algorithm with respect to a distance function. Formally, given a distance function d, the multiplicative bound of an algorithm is said to be B if the following condition is satisfied for all possible values of unary potentials θ_a(·) and non-negative edge weights w_ab:

$$\sum_{X_a \in \mathbf{X}} \theta_a(\hat{x}_a) + \sum_{(X_a, X_b) \in E} w_{ab}\, d(\hat{x}_a, \hat{x}_b) \;\le\; \sum_{X_a \in \mathbf{X}} \theta_a(x^*_a) + B \sum_{(X_a, X_b) \in E} w_{ab}\, d(x^*_a, x^*_b). \quad (1)$$

Here, x̂ is the labeling estimated by the algorithm for the given values of unary potentials and edge weights, and x* is an optimal labeling. Multiplicative bounds are greater than or equal to 1, and are invariant to reparameterizations of the unary potentials. A multiplicative bound B is said to be tight if the above inequality holds as an equality for some value of unary potentials and edge weights.
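The energy Q(x) above is a sum of unary terms and distance-weighted pairwise terms; a minimal sketch of its evaluation, with our own array layout and 0-based label indices:

```python
import numpy as np

def energy(x, unary, edges, weights, dist):
    """Evaluate the metric-labeling energy Q(x) defined above.
    x       : length-n sequence of label indices
    unary   : n x h array, unary[a, i] = theta_a(l_i)
    edges   : list of neighboring index pairs (a, b)
    weights : non-negative edge weights w_ab, one per edge
    dist    : h x h metric distance matrix, dist[i, j] = d(l_i, l_j)"""
    q = sum(unary[a, x[a]] for a in range(len(x)))
    q += sum(w * dist[x[a], x[b]] for (a, b), w in zip(edges, weights))
    return q

unary = np.array([[0.0, 1.0], [1.0, 0.0]])
edges, weights = [(0, 1)], [2.0]
dist = np.array([[0.0, 1.0], [1.0, 0.0]])   # the Potts (uniform) metric
```

With this toy instance, the labeling (0, 1) pays no unary cost but the full pairwise cost 2·d(l_0, l_1) = 2, while (0, 0) pays a unary cost of 1 and no pairwise cost.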
Linear Programming Relaxation. An overcomplete representation of a labeling can be specified using the following variables: (i) unary variables y_a(i) ∈ {0, 1} for all X_a ∈ X and l_i ∈ L such that y_a(i) = 1 if and only if X_a is assigned the label l_i; and (ii) pairwise variables y_ab(i, j) ∈ {0, 1} for all (X_a, X_b) ∈ E and l_i, l_j ∈ L such that y_ab(i, j) = 1 if and only if X_a and X_b are assigned the labels l_i and l_j respectively. This allows us to formulate metric labeling as follows:

$$\min_{\mathbf{y}} \sum_{X_a \in \mathbf{X}} \sum_{l_i \in L} \theta_a(l_i)\, y_a(i) + \sum_{(X_a, X_b) \in E} \sum_{l_i, l_j \in L} w_{ab}\, d(l_i, l_j)\, y_{ab}(i, j),$$
$$\text{s.t.} \quad \sum_{l_i \in L} y_a(i) = 1, \;\forall X_a \in \mathbf{X},$$
$$\sum_{l_j \in L} y_{ab}(i, j) = y_a(i), \;\forall (X_a, X_b) \in E,\; l_i \in L,$$
$$\sum_{l_i \in L} y_{ab}(i, j) = y_b(j), \;\forall (X_a, X_b) \in E,\; l_j \in L,$$
$$y_a(i) \in \{0, 1\}, \; y_{ab}(i, j) \in \{0, 1\}, \;\forall X_a \in \mathbf{X},\; (X_a, X_b) \in E,\; l_i, l_j \in L.$$

By relaxing the final set of constraints such that the optimization variables can take any value between 0 and 1 inclusive, we obtain a linear program (LP). The computational complexity of solving the LP relaxation is polynomial in the size of the problem.

Rounding Procedure. In order to prove theoretical guarantees of the LP relaxation, it is common to use a rounding procedure that can convert a feasible fractional solution y of the LP relaxation to a feasible integer solution ŷ of the integer linear program. Several rounding procedures have been proposed in the literature. In this work, we focus on the randomized parallel rounding procedures proposed in [5, 10]. These procedures have the property that, given a fractional solution y, the probability of assigning a label l_i ∈ L to a random variable X_a ∈ X is equal to y_a(i), that is,

$$\Pr(\hat{y}_a(i) = 1) = y_a(i). \quad (2)$$

We will describe the various rounding procedures in detail in sections 3-5. For now, we would like to note that our reason for focusing on the parallel rounding of [5, 10] is that they provide the best known polynomial-time theoretical guarantees for metric labeling. Specifically, we are interested in their approximation factor, which is defined next.

Approximation Factor.
Given a distance function d, the approximation factor for a rounding procedure is said to be F if the following condition is satisfied for all feasible fractional solutions y:

$$\mathbb{E}\left[\sum_{l_i, l_j \in L} d(l_i, l_j)\, \hat{y}_a(i)\, \hat{y}_b(j)\right] \;\le\; F \sum_{l_i, l_j \in L} d(l_i, l_j)\, y_{ab}(i, j). \quad (3)$$

Here, ŷ refers to the integer solution, and the expectation is taken with respect to the randomized rounding procedure applied to the feasible solution y. Given a rounding procedure with an approximation factor of F, an optimal fractional solution y* of the LP relaxation can be rounded to a labeling ŷ that satisfies the following condition:

$$\mathbb{E}\left[\sum_{X_a \in \mathbf{X}} \sum_{l_i \in L} \theta_a(l_i)\, \hat{y}_a(i) + \sum_{(X_a, X_b) \in E} \sum_{l_i, l_j \in L} w_{ab}\, d(l_i, l_j)\, \hat{y}_a(i)\, \hat{y}_b(j)\right] \le \sum_{X_a \in \mathbf{X}} \sum_{l_i \in L} \theta_a(l_i)\, y^*_a(i) + F \sum_{(X_a, X_b) \in E} \sum_{l_i, l_j \in L} w_{ab}\, d(l_i, l_j)\, y^*_{ab}(i, j).$$

The above inequality follows directly from properties (2) and (3). Similar to multiplicative bounds, approximation factors are always greater than or equal to 1, and are invariant to reparameterizations of the unary potentials. An approximation factor F is said to be tight if the above inequality holds as an equality for some value of unary potentials and edge weights.

Submodular Energy Function. We will use the following important fact throughout this paper. Given an energy function defined using arbitrary unary potentials, non-negative edge weights and a submodular distance function, an optimal labeling can be computed in polynomial time by solving an equivalent minimum st-cut problem [6]. Recall that a submodular distance function d' over a label set L = {l_1, l_2, ..., l_h} satisfies the following properties: (i) d'(l_i, l_j) ≥ 0 for all l_i, l_j ∈ L, and d'(l_i, l_j) = 0 if and only if i = j; and (ii) d'(l_i, l_j) + d'(l_{i+1}, l_{j+1}) ≤ d'(l_i, l_{j+1}) + d'(l_{i+1}, l_j) for all l_i, l_j ∈ L \ {l_h} (where \ refers to set difference).

3 Complete Rounding and Complete Move

We start with a simple rounding scheme, which we call complete rounding. While complete rounding is not very accurate, it helps illustrate the flavor of our results.
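The two submodularity conditions above are easy to check mechanically for a candidate distance matrix; a small 0-indexed sketch (the function name is ours):

```python
def is_submodular(dist):
    """Check the conditions stated above for dist[i][j] = d'(l_i, l_j)
    over labels l_1..l_h (0-indexed here): non-negativity with
    d'(i, j) = 0 iff i == j, and
    d'(i, j) + d'(i+1, j+1) <= d'(i, j+1) + d'(i+1, j) for i, j < h-1."""
    h = len(dist)
    for i in range(h):
        for j in range(h):
            if dist[i][j] < 0 or (dist[i][j] == 0) != (i == j):
                return False
    for i in range(h - 1):
        for j in range(h - 1):
            if dist[i][j] + dist[i + 1][j + 1] > dist[i][j + 1] + dist[i + 1][j]:
                return False
    return True
```

For instance, the linear distance |i − j| passes the check, while the truncated linear distance min{|i − j|, 1} over 3 labels fails it, matching the discussion of non-submodular distances in Section 4.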
We will subsequently consider its generalizations, which have been useful in obtaining the best-known approximation factors for various special cases of metric labeling. The complete rounding procedure consists of a single stage where we use the set of all unary variables to obtain a labeling (as opposed to the other rounding procedures discussed subsequently). Algorithm 1 describes its main steps. Intuitively, it treats the value of the unary variable y_a(i) as the probability of assigning the label l_i to the random variable X_a. It obtains a labeling by sampling from all the distributions y_a = [y_a(i), ∀l_i ∈ L] simultaneously using the same random number. It can be shown that using a different random number to sample the distributions y_a and y_b of two neighboring random variables (X_a, X_b) ∈ E results in an infinite approximation factor. For example, let y_a(i) = y_b(i) = 1/h for all l_i ∈ L, where h is the number of labels. The pairwise variables y_ab that minimize the energy function are y_ab(i, i) = 1/h and y_ab(i, j) = 0 for i ≠ j. For this feasible solution of the LP relaxation, the RHS of inequality (3) is 0 for any finite F, while the LHS of inequality (3) is strictly greater than 0 if h > 1. However, we will shortly show that using the same random number r for all random variables provides a finite approximation factor.

Algorithm 1: The complete rounding procedure.
input: A feasible solution y of the LP relaxation.
1: Pick a real number r uniformly from [0, 1].
2: for all X_a ∈ X do
3:   Define Y_a(0) = 0 and Y_a(i) = Σ_{j=1}^{i} y_a(j) for all l_i ∈ L.
4:   Assign the label l_i ∈ L to the random variable X_a if Y_a(i−1) < r ≤ Y_a(i).
5: end for

We now turn our attention to designing a move-making algorithm whose multiplicative bound matches the approximation factor of the complete rounding procedure. To this end, we modify the range expansion algorithm proposed in [15] for truncated convex pairwise potentials to handle a general (semi-)metric distance function.
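A sketch of Algorithm 1: the crucial point discussed above is that one shared random number r thresholds every variable's cumulative distribution Y_a (indices are 0-based here, unlike the text, and the function name is ours):

```python
import numpy as np

def complete_rounding(y, rng=None):
    """Complete rounding sketch.  y is an n x h array of unary variables,
    each row a distribution over labels.  A single r, shared by all
    variables as required above, thresholds each row's cumulative sums."""
    rng = np.random.default_rng(rng)
    r = rng.uniform()                 # one r for all random variables
    Y = np.cumsum(y, axis=1)          # Y_a(i) = sum_{j <= i} y_a(j)
    Y[:, -1] = 1.0                    # guard against floating-point undershoot
    # label of X_a is the first index i with Y_a(i-1) < r <= Y_a(i)
    return np.argmax(Y >= r, axis=1)

y = np.array([[0.7, 0.3], [0.2, 0.8]])
labels = complete_rounding(y, rng=0)
```

Because the same r is used everywhere, two variables with similar distributions tend to receive the same label, which is what keeps the approximation factor finite.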
Our method, which we refer to as the complete move-making algorithm, considers all putative labels of all random variables, and provides an approximate solution in a single iteration. Algorithm 2 describes its two main steps. First, it computes a submodular overestimation of the given distance function by solving the following optimization problem:

$$\overline{d} = \operatorname*{argmin}_{d'} \; t \quad (4)$$
$$\text{s.t.} \quad d'(l_i, l_j) \le t\, d(l_i, l_j), \;\forall l_i, l_j \in L,$$
$$d'(l_i, l_j) \ge d(l_i, l_j), \;\forall l_i, l_j \in L,$$
$$d'(l_i, l_j) + d'(l_{i+1}, l_{j+1}) \le d'(l_i, l_{j+1}) + d'(l_{i+1}, l_j), \;\forall l_i, l_j \in L \setminus \{l_h\}.$$

The above problem minimizes the maximum ratio of the estimated distance to the original distance over all pairs of labels, that is, max_{i≠j} d'(l_i, l_j)/d(l_i, l_j). We will refer to the optimal value of problem (4) as the submodular distortion of the distance function d. Second, it replaces the original distance function by the submodular overestimation and computes an approximate solution to the original metric labeling problem by solving a single minimum st-cut problem. Note that, unlike the range expansion algorithm [15] that uses the readily available submodular overestimation of a truncated convex distance (namely, the corresponding convex distance function), our approach estimates the submodular overestimation via the LP (4). Since the LP (4) can be solved for any arbitrary distance function, it makes complete move-making more generally applicable.

Algorithm 2: The complete move-making algorithm.
input: Unary potentials θ_a(·), edge weights w_ab, distance function d.
1: Compute a submodular overestimation d̄ of d by solving problem (4).
2: Using the approach of [6], solve the following problem via an equivalent minimum st-cut problem:
$$\hat{\mathbf{x}} = \operatorname*{argmin}_{\mathbf{x} \in L^n} \sum_{X_a \in \mathbf{X}} \theta_a(x_a) + \sum_{(X_a, X_b) \in E} w_{ab}\, \overline{d}(x_a, x_b).$$

The following theorem establishes the theoretical guarantees of the complete move-making algorithm and the complete rounding procedure.

Theorem 1.
The tight multiplicative bound of the complete move-making algorithm is equal to the submodular distortion of the distance function. Furthermore, the tight approximation factor of the complete rounding procedure is also equal to the submodular distortion of the distance function.

In terms of computational complexity, complete move-making is significantly faster than solving the LP relaxation. Specifically, given an MRF with n random variables and m edges, and a label set with h labels, the LP relaxation requires at least O(m³h³ log(m²h³)) time, since it consists of O(mh²) optimization variables and O(mh) constraints. In contrast, complete move-making requires O(nmh³ log(m)) time, since the graph constructed using the method of [6] consists of O(nh) nodes and O(mh²) arcs. Note that complete move-making also requires us to solve the linear program (4). However, since problem (4) is independent of the unary potentials and the edge weights, it only needs to be solved once beforehand in order to compute the approximate solution for any metric labeling problem defined using the distance function d.

4 Interval Rounding and Interval Moves

Theorem 1 implies that the approximation factor of the complete rounding procedure is very large for distance functions that are highly non-submodular. For example, consider the truncated linear distance function defined as follows over a label set L = {l_1, l_2, ..., l_h}: d(l_i, l_j) = min{|i − j|, M}. Here, M is a user-specified parameter that determines the maximum distance. The tightest submodular overestimation of the above distance function is the linear distance function, that is, d̄(l_i, l_j) = |i − j|. This implies that the submodular distortion of the truncated linear metric is (h − 1)/M, and therefore, the approximation factor of the complete rounding procedure is also (h − 1)/M. In order to avoid this large approximation factor, Chekuri et al.
[5] proposed an interval rounding procedure, which captures the intuition that it is beneficial to assign similar labels to as many random variables as possible. Algorithm 3 provides a description of interval rounding. The rounding procedure chooses an interval of at most q consecutive labels (step 2). It generates a random number r (step 3), and uses it to attempt to assign labels to previously unlabeled random variables from the selected interval (steps 4-7). It can be shown that the overall procedure converges in a polynomial number of iterations with probability 1 [5]. Note that if we fix q = h and z = 1, interval rounding becomes equivalent to complete rounding. However, the analyses in [5, 10] show that other values of q provide better approximation factors for various special cases.

Algorithm 3: The interval rounding procedure.
input: A feasible solution y of the LP relaxation.
1: repeat
2:   Pick an integer z uniformly from [−q + 2, h]. Define an interval of labels I = {l_s, ..., l_e}, where s = max{z, 1} is the start index and e = min{z + q − 1, h} is the end index.
3:   Pick a real number r uniformly from [0, 1].
4:   for all unlabeled random variables X_a do
5:     Define Y_a(0) = 0 and Y_a(i) = Σ_{j=s}^{s+i−1} y_a(j) for all i ∈ {1, ..., e − s + 1}.
6:     Assign the label l_{s+i−1} ∈ I to X_a if Y_a(i−1) < r ≤ Y_a(i).
7:   end for
8: until all random variables have been assigned a label.

Our goal is to design a move-making algorithm whose multiplicative bound matches the approximation factor of interval rounding for any choice of q. To this end, we propose the interval move-making algorithm, which generalizes the range expansion algorithm [15], originally proposed for truncated convex distances, to arbitrary distance functions. Algorithm 4 provides its main steps. The central idea of the method is to improve a given labeling x̂ by allowing each random variable X_a to either retain its current label x̂_a or choose a new label from an interval of consecutive labels.
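A sketch of Algorithm 3: each pass draws an interval and a single r, and labels only those still-unlabeled variables whose cumulative interval mass straddles r; variables missed by the interval stay unlabeled until a later pass (0-based label indices in the output, function name ours):

```python
import numpy as np

def interval_rounding(y, q, rng=None):
    """Interval rounding sketch.  y is an n x h array of unary variables;
    q is the maximum interval length.  Each round picks z uniformly from
    {-q+2, ..., h}, forms the interval I = {l_s, ..., l_e} with 1-based
    s = max(z, 1), e = min(z + q - 1, h), draws one r, and assigns any
    still-unlabeled variable whose cumulative mass within I straddles r."""
    rng = np.random.default_rng(rng)
    n, h = y.shape
    labels = np.full(n, -1)               # -1 marks "unlabeled"
    while (labels < 0).any():
        z = rng.integers(-q + 2, h + 1)   # z in {-q+2, ..., h}
        s, e = max(z, 1), min(z + q - 1, h)
        r = rng.uniform()
        for a in np.flatnonzero(labels < 0):
            Y = 0.0
            for i in range(s, e + 1):     # 1-based labels l_s .. l_e
                Y_prev, Y = Y, Y + y[a, i - 1]
                if Y_prev < r <= Y:
                    labels[a] = i - 1     # store 0-based label index
                    break
    return labels
```

With degenerate rows (all mass on one label), every variable can only ever receive its own label, which makes the procedure easy to sanity-check.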
In more detail, let I = {l_s, ..., l_e} ⊆ L be an interval of labels of length at most q (step 4). For the sake of simplicity, let us assume that x̂_a ∉ I for any random variable X_a. We define I_a = I ∪ {x̂_a} (step 5). For each pair of neighboring random variables (X_a, X_b) ∈ E, we compute a submodular distance function d_{x̂_a,x̂_b} : I_a × I_b → R_+ by solving the following linear program (step 6):

$$d_{\hat{x}_a, \hat{x}_b} = \operatorname*{argmin}_{d'} \; t \quad (5)$$
$$\text{s.t.} \quad d'(l_i, l_j) \le t\, d(l_i, l_j), \;\forall l_i \in I_a,\; l_j \in I_b,$$
$$d'(l_i, l_j) \ge d(l_i, l_j), \;\forall l_i \in I_a,\; l_j \in I_b,$$
$$d'(l_i, l_j) + d'(l_{i+1}, l_{j+1}) \le d'(l_i, l_{j+1}) + d'(l_{i+1}, l_j), \;\forall l_i, l_j \in I \setminus \{l_e\},$$
$$d'(l_i, l_e) + d'(l_{i+1}, \hat{x}_b) \le d'(l_i, \hat{x}_b) + d'(l_{i+1}, l_e), \;\forall l_i \in I \setminus \{l_e\},$$
$$d'(l_e, l_j) + d'(\hat{x}_a, l_{j+1}) \le d'(l_e, l_{j+1}) + d'(\hat{x}_a, l_j), \;\forall l_j \in I \setminus \{l_e\},$$
$$d'(l_e, l_e) + d(\hat{x}_a, \hat{x}_b) \le d'(l_e, \hat{x}_b) + d'(\hat{x}_a, l_e).$$

Similar to problem (4), the above problem minimizes the maximum ratio of the estimated distance to the original distance. However, instead of introducing constraints for all pairs of labels, it only considers pairs of labels l_i and l_j where l_i ∈ I_a and l_j ∈ I_b. Furthermore, it does not modify the distance between the current labels x̂_a and x̂_b (as can be seen in the last constraint of problem (5)). Given the submodular distance functions d_{x̂_a,x̂_b}, we can compute a new labeling x by solving the following optimization problem via minimum st-cut using the method of [6] (step 7):

$$\mathbf{x} = \operatorname*{argmin}_{\mathbf{x}} \sum_{X_a \in \mathbf{X}} \theta_a(x_a) + \sum_{(X_a, X_b) \in E} w_{ab}\, d_{\hat{x}_a, \hat{x}_b}(x_a, x_b) \quad \text{s.t.} \; x_a \in I_a, \;\forall X_a \in \mathbf{X}. \quad (6)$$

If the energy of the new labeling x is less than that of the current labeling x̂, then we update our labeling to x (steps 8-10). Otherwise, we retain the current estimate of the labeling and consider another interval. The algorithm converges when the energy does not decrease for any interval of length at most q.
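Problems (4) and (5) both minimize the maximum ratio t of the overestimate to the original distance. Rather than solving the LP, the sketch below numerically confirms the Section 4 claim that the linear distance is a submodular overestimate of the truncated linear distance with ratio (h − 1)/M (function names are ours):

```python
def truncated_linear(h, M):
    """d(l_i, l_j) = min(|i - j|, M) over h labels (0-indexed here)."""
    return [[min(abs(i - j), M) for j in range(h)] for i in range(h)]

def distortion(d_over, d):
    """max_{i != j} d'(l_i, l_j) / d(l_i, l_j) for an overestimate d' >= d,
    i.e. the objective value t of problems (4)/(5) at a feasible point."""
    h = len(d)
    return max(d_over[i][j] / d[i][j]
               for i in range(h) for j in range(h) if i != j)

h, M = 8, 3
d = truncated_linear(h, M)
linear = [[abs(i - j) for j in range(h)] for i in range(h)]
t = distortion(linear, d)   # equals (h - 1) / M for this pair
```

The ratio is maximized by the farthest label pair, where the linear distance is h − 1 while the truncated distance has saturated at M.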
Note that, once again, the main difference between interval move-making and the range expansion algorithm is the use of an appropriate optimization problem, namely the LP (5), to obtain a submodular overestimation of the given distance function. This allows us to use interval move-making for the general metric labeling problem, instead of focusing only on truncated convex models.

Algorithm 4: The interval move-making algorithm.
input: Unary potentials θ_a(·), edge weights w_ab, distance function d, initial labeling x⁰.
1: Set the current labeling to the initial labeling, that is, x̂ = x⁰.
2: repeat
3:   for all z ∈ [−q + 2, h] do
4:     Define an interval of labels I = {l_s, ..., l_e}, where s = max{z, 1} is the start index and e = min{z + q − 1, h} is the end index.
5:     Define I_a = I ∪ {x̂_a} for all random variables X_a ∈ X.
6:     Obtain submodular overestimates d_{x̂_a,x̂_b} for each pair of neighboring random variables (X_a, X_b) ∈ E by solving problem (5).
7:     Obtain a new labeling x by solving problem (6).
8:     if the energy of x is less than the energy of x̂ then
9:       Update x̂ = x.
10:    end if
11:  end for
12: until the energy cannot be decreased further.

The following theorem establishes the theoretical guarantees of the interval move-making algorithm and the interval rounding procedure.

Theorem 2. The tight multiplicative bound of the interval move-making algorithm is equal to the tight approximation factor of the interval rounding procedure.

An interval move-making algorithm that uses an interval length of q runs for at most O(h/q) iterations. This follows from a simple modification of a result by Gupta and Tardos [8] (specifically, Theorem 3.7). Hence, the total time complexity of interval move-making is O(nmhq² log(m)), since each iteration solves a minimum st-cut problem on a graph with O(nq) nodes and O(mq²) arcs. In other words, interval move-making is at most as computationally complex as complete move-making, which in turn is significantly less complex than solving the LP relaxation.
Note that problem (5), which is required for interval move-making, is independent of the unary potentials and the edge weights. Hence, it only needs to be solved once beforehand for all pairs of labels (x̂_a, x̂_b) ∈ L × L in order to obtain a solution for any metric labeling problem defined using the distance function d.

5 Hierarchical Rounding and Hierarchical Moves

We now consider the most general form of parallel rounding that has been proposed in the literature, namely the hierarchical rounding procedure [10]. The rounding relies on a hierarchical clustering of the labels. Formally, we denote a hierarchical clustering of m levels for the label set L by C = {C(i), i = 1, ..., m}. At each level i, the clustering C(i) = {C(i, j) ⊆ L, j = 1, ..., h_i} is mutually exclusive and collectively exhaustive, that is,

$$\bigcup_j C(i, j) = L, \quad C(i, j) \cap C(i, j') = \emptyset, \;\forall j \ne j'.$$

Furthermore, for each cluster C(i, j) at a level i ≥ 2, there exists a unique cluster C(i−1, j') in the level i−1 such that C(i, j) ⊆ C(i−1, j'). We call the cluster C(i−1, j') the parent of the cluster C(i, j) and define p(i, j) = j'. Similarly, we call C(i, j) a child of C(i−1, j'). Without loss of generality, we assume that there exists a single cluster at level 1 that contains all the labels, and that each cluster at level m contains a single label.

Algorithm 5: The hierarchical rounding procedure.
input: A feasible solution y of the LP relaxation.
1: Define f_a^1 = 1 for all X_a ∈ X.
2: for all i ∈ {2, ..., m} do
3:   for all X_a ∈ X do
4:     Define z_a^i(j) for all j ∈ {1, ..., h_i} as follows: z_a^i(j) = Σ_{k : l_k ∈ C(i,j)} y_a(k) if p(i, j) = f_a^{i−1}, and z_a^i(j) = 0 otherwise.
5:     Define y_a^i(j) for all j ∈ {1, ..., h_i} as follows: y_a^i(j) = z_a^i(j) / Σ_{j'=1}^{h_i} z_a^i(j').
6:   end for
7:   Using a rounding procedure (complete or interval) on y^i = [y_a^i(j), ∀X_a ∈ X, j ∈ {1, ..., h_i}], obtain an integer solution ŷ^i.
8:   for all X_a ∈ X do
9:     Let k_a ∈ {1, ..., h_i} be the index such that ŷ_a^i(k_a) = 1. Define f_a^i = k_a.
10:   end for
11: end for
12: for all X_a ∈ X do
13:   Let l_k be the unique label present in the cluster C(m, f_a^m). Assign l_k to X_a.
14: end for

Algorithm 5 describes the hierarchical rounding procedure. Given a clustering C, it proceeds in a top-down fashion through the hierarchy while assigning each random variable to a cluster in the current level. Let f_a^i be the index of the cluster assigned to the random variable X_a in the level i. In the first step, the rounding procedure assigns all the random variables to the unique cluster C(1, 1) (step 1). At each step i, it assigns each random variable to a unique cluster in the level i by computing a conditional probability distribution as follows. The conditional probability y_a^i(j) of assigning the random variable X_a to the cluster C(i, j) is proportional to Σ_{l_k ∈ C(i,j)} y_a(k) if p(i, j) = f_a^{i−1} (steps 3-6). The conditional probability y_a^i(j) = 0 if p(i, j) ≠ f_a^{i−1}; that is, a random variable cannot be assigned to a cluster C(i, j) if it was not assigned to that cluster's parent in the previous step. Using a rounding procedure (complete or interval) on y^i, we obtain an assignment of random variables to the clusters at level i (step 7). Once such an assignment is obtained, the values f_a^i are computed for all random variables X_a (steps 8-10). At the end of step m, hierarchical rounding will have assigned each random variable to a unique cluster in the level m. Since each cluster at level m consists of a single label, this provides us with a labeling of the MRF (steps 12-14). Our goal is to design a move-making algorithm whose multiplicative bound matches the approximation factor of the hierarchical rounding procedure for any choice of hierarchical clustering C. To this end, we propose the hierarchical move-making algorithm, which extends the hierarchical graph cuts approach for hierarchically well-separated tree (HST) metrics proposed in [14]. Algorithm 6 provides its main steps.
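Steps 3-6 of Algorithm 5 compute, for one variable, a conditional distribution over the clusters of level i; a minimal sketch with our own encoding of the hierarchy (clusters as 0-based sets of label indices, `parent` as a list of cluster indices at the previous level):

```python
import numpy as np

def level_distribution(y_a, clusters, parent, f_prev):
    """Steps 3-6 of Algorithm 5 for a single variable: given its unary
    distribution y_a over the labels, the level-i clusters (each a set of
    label indices), the parent index of every cluster, and the cluster
    f_prev chosen at level i-1, return the conditional distribution
    y^i_a over the level-i clusters (z, then normalize)."""
    z = np.array([sum(y_a[k] for k in C) if parent[j] == f_prev else 0.0
                  for j, C in enumerate(clusters)])
    return z / z.sum()
```

Clusters whose parent differs from f_prev get zero mass, so the variable can only descend through the branch of the hierarchy it was assigned to in the previous round.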
Algorithm 6 The hierarchical move-making algorithm.
input Unary potentials θ_a(·), edge weights w_ab, distance function d.
1: for all j ∈ {1, ..., h_m} do
2:   Let l_k be the unique label in the cluster C(m, j). Define x^{m,j}_a = l_k for all X_a ∈ X.
3: end for
4: for all i ∈ {2, ..., m} do
5:   for all j ∈ {1, ..., h_{m−i+1}} do
6:     Define L^{m−i+1,j}_a = {x^{m−i+2,j'}_a : p(m−i+2, j') = j, j' ∈ {1, ..., h_{m−i+2}}}.
7:     Using a move-making algorithm (complete or interval), compute the labeling x^{m−i+1,j} under the constraint x^{m−i+1,j}_a ∈ L^{m−i+1,j}_a.
8:   end for
9: end for
10: The final solution is x^{1,1}.

In contrast to hierarchical rounding, the move-making algorithm traverses the hierarchy in a bottom-up fashion while computing a labeling for each cluster in the current level. Let x^{i,j} be the labeling corresponding to the cluster C(i, j). At the first step, when considering the level m of the clustering, all the random variables are assigned the same label. Specifically, x^{m,j}_a is equal to the unique label contained in the cluster C(m, j) (steps 1-3). At step i, it computes the labeling x^{m−i+1,j} for each cluster C(m−i+1, j) by using the labelings computed in the previous step. Specifically, it restricts the label assigned to a random variable X_a in the labeling x^{m−i+1,j} to the subset of labels that were assigned to it by the labelings corresponding to the children of C(m−i+1, j) (step 6). Under this restriction, the labeling x^{m−i+1,j} is computed by approximately minimizing the energy using a move-making algorithm (step 7). Implicit in our description is the assumption that we will use a move-making algorithm (complete or interval) in step 7 of Algorithm 6 whose multiplicative bound matches the approximation factor of the rounding procedure (complete or interval) used in step 7 of Algorithm 5.
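The label-set restriction of step 6 of Algorithm 6 amounts to collecting, for each variable, the labels chosen by the child clusters' labelings. A minimal sketch (variable and label names are made up for illustration):

```python
# Sketch of step 6 of Algorithm 6: the candidate label set for variable a in
# cluster (level, j) is the set of labels assigned to a by the labelings of
# that cluster's children. `labelings[j_child]` maps each variable to a label.
def candidate_labels(a, children, labelings):
    return {labelings[j_child][a] for j_child in children}

labelings = {0: {'Xa': 'l1', 'Xb': 'l2'},   # labeling of child cluster 0
             1: {'Xa': 'l3', 'Xb': 'l3'}}   # labeling of child cluster 1
print(candidate_labels('Xa', children=[0, 1], labelings=labelings))
```

The move-making algorithm in step 7 then minimizes the energy over labelings in which each variable is restricted to its candidate set; since the sets shrink towards the leaves, each subproblem is defined over a smaller label space.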
Note that, unlike the hierarchical graph cuts approach [14], the hierarchical move-making algorithm can be used for any arbitrary clustering and not just the one specified by an HST metric. The following theorem establishes the theoretical guarantees of the hierarchical move-making algorithm and the hierarchical rounding procedure. Theorem 3. The tight multiplicative bound of the hierarchical move-making algorithm is equal to the tight approximation factor of the hierarchical rounding procedure. Note that hierarchical move-making solves a series of problems defined on a smaller label set. Since the complexity of complete and interval move-making is superlinear in the number of labels, it can be verified that the hierarchical move-making algorithm is at most as computationally complex as the complete move-making algorithm (corresponding to the case when the clustering consists of only one cluster that contains all the labels). Hence, hierarchical move-making is significantly faster than solving the LP relaxation. 6 Discussion For any general distance function that can be used to specify the (semi-)metric labeling problem, we proved that the approximation factor of a large family of parallel rounding procedures is matched by the multiplicative bound of move-making algorithms. This generalizes previously known results on the guarantees of move-making algorithms in two ways: (i) in contrast to previous results [14, 15, 20] that focused on special cases of distance functions, our results are applicable to arbitrary semi-metric distance functions; and (ii) the guarantees provided by our theorems are tight. Our experiments (described in the technical report) confirm that the rounding-based move-making algorithms provide similar accuracy to the LP relaxation, while being significantly faster due to the use of efficient minimum st-cut solvers. Several natural questions arise. 
What is the exact characterization of the rounding procedures for which it is possible to design matching move-making algorithms? Can we design rounding-based move-making algorithms for other combinatorial optimization problems? Answering these questions will not only expand our theoretical understanding, but also result in the development of efficient and accurate algorithms.

Acknowledgements. This work is funded by the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC Grant agreement number 259112.

References
[1] A. Archer, J. Fakcharoenphol, C. Harrelson, R. Krauthgamer, K. Talwar, and E. Tardos. Approximate classification via earthmover metrics. In SODA, 2004.
[2] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. PAMI, 2004.
[3] Y. Boykov, O. Veksler, and R. Zabih. Markov random fields with efficient approximations. In CVPR, 1998.
[4] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. In ICCV, 1999.
[5] C. Chekuri, S. Khanna, J. Naor, and L. Zosin. Approximation algorithms for the metric labeling problem via a new linear programming formulation. In SODA, 2001.
[6] B. Flach and D. Schlesinger. Transforming an arbitrary minsum problem into a binary one. Technical report, TU Dresden, 2006.
[7] A. Globerson and T. Jaakkola. Fixing max-product: Convergent message passing algorithms for MAP LP-relaxations. In NIPS, 2007.
[8] A. Gupta and E. Tardos. A constant factor approximation algorithm for a class of classification problems. In STOC, 2000.
[9] T. Hazan and A. Shashua. Convergent message-passing algorithms for inference over general graphs with convex free energy. In UAI, 2008.
[10] J. Kleinberg and E. Tardos. Approximation algorithms for classification problems with pairwise relationships: Metric labeling and Markov random fields. In STOC, 1999.
[11] V. Kolmogorov. Convergent tree-reweighted message passing for energy minimization. PAMI, 2006.
[12] N. Komodakis, N. Paragios, and G. Tziritas. MRF optimization via dual decomposition: Message-passing revisited. In ICCV, 2007.
[13] A. Koster, C. van Hoesel, and A. Kolen. The partial constraint satisfaction problem: Facets and lifting theorems. Operations Research Letters, 1998.
[14] M. P. Kumar and D. Koller. MAP estimation of semi-metric MRFs via hierarchical graph cuts. In UAI, 2009.
[15] M. P. Kumar and P. Torr. Improved moves for truncated convex models. In NIPS, 2008.
[16] P. Ravikumar, A. Agarwal, and M. Wainwright. Message-passing for graph-structured linear programs: Proximal projections, convergence and rounding schemes. In ICML, 2008.
[17] M. Schlesinger. Syntactic analysis of two-dimensional visual signals in noisy conditions. Kibernetika, 1976.
[18] R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, and C. Rother. A comparative study of energy minimization methods for Markov random fields with smoothness-based priors. PAMI, 2008.
[19] D. Tarlow, D. Batra, P. Kohli, and V. Kolmogorov. Dynamic tree block coordinate ascent. In ICML, 2011.
[20] O. Veksler. Efficient graph-based energy minimization methods in computer vision. PhD thesis, Cornell University, 1999.
[21] O. Veksler. Graph cut based optimization for MRFs with truncated convex priors. In CVPR, 2007.
[22] M. Wainwright, T. Jaakkola, and A. Willsky. MAP estimation via agreement on trees: Message passing and linear programming. Transactions on Information Theory, 2005.
[23] Y. Weiss, C. Yanover, and T. Meltzer. MAP estimation, linear programming and belief propagation with convex free energies. In UAI, 2007.
[24] T. Werner. A linear programming approach to max-sum problem: A review. PAMI, 2007.
[25] T. Werner. Revisiting the linear programming relaxation approach to Gibbs energy minimization and weighted constraint satisfaction. PAMI, 2010.
Analog Memories in a Balanced Rate-Based Network of E-I Neurons

Dylan Festa, Guillaume Hennequin, Máté Lengyel
df325@cam.ac.uk, gjeh2@cam.ac.uk, m.lengyel@eng.cam.ac.uk
Computational & Biological Learning Lab, Department of Engineering, University of Cambridge, UK

Abstract

The persistent and graded activity often observed in cortical circuits is sometimes seen as a signature of autoassociative retrieval of memories stored earlier in synaptic efficacies. However, despite decades of theoretical work on the subject, the mechanisms that support the storage and retrieval of memories remain unclear. Previous proposals concerning the dynamics of memory networks have fallen short of incorporating some key physiological constraints in a unified way. Specifically, some models violate Dale's law (i.e. allow neurons to be both excitatory and inhibitory), while some others restrict the representation of memories to a binary format, or induce recall states in which some neurons fire at rates close to saturation. We propose a novel control-theoretic framework to build functioning attractor networks that satisfy a set of relevant physiological constraints. We directly optimize networks of excitatory and inhibitory neurons to force sets of arbitrary analog patterns to become stable fixed points of the dynamics. The resulting networks operate in the balanced regime, are robust to corruptions of the memory cue as well as to ongoing noise, and incidentally explain the reduction of trial-to-trial variability following stimulus onset that is ubiquitously observed in sensory and motor cortices. Our results constitute a step forward in our understanding of the neural substrate of memory.

1 Introduction

Memories are thought to be encoded in the joint, persistent activity of groups of neurons.
According to this view, memories are embedded via long-lasting modifications of the synaptic connections between neurons (storage) such that partial or noisy initialization of the network activity drives the collective dynamics of the neurons into the corresponding memory state (recall) [1]. Models of memory circuits following these principles abound in the theoretical neuroscience literature, but few respect some of the most fundamental properties of brain networks, including: i) the separation of neurons into distinct classes of excitatory (E) and inhibitory (I) cells – known as Dale's law –, ii) the presence of recurrent and sparse synaptic connections, iii) the possibility for each neuron to sustain graded levels of activity in different memories, iv) the firing of action potentials at reasonably low rates, and v) a dynamic balance of E and I inputs.

In the original Hopfield network [1], connectivity must be symmetrical, which violates Dale's law. Moreover, just as in much of the work that followed it, memories are encoded in binary neuronal responses and so converge towards effectively binary recall states even if the recall dynamics formally uses graded activities [2]. Subsequent work considered non-binary pattern distributions [3, 4], and derived high theoretical capacity limits for them, but those capacities proved difficult – if not impossible – to realise in practice [5, 6], and the network dynamics therein did not explicitly model inhibitory neurons, thus implicitly assuming instantaneous inhibitory feedback. More recent work incorporated Dale's law, and described neurons using the more realistic, leaky integrate-and-fire (LIF) neuron model [7]. However, the stability of the recall states still relied critically on the saturating behavior of the LIF input-output transfer function at high rates. Although it was later shown that dynamic feedback inhibition can stabilize relatively low firing rates in subpopulations of more tightly connected neurons [8, 9], inhibitory feedback in these models is global, and calibrated for a single stereotypical level of excitation for all memories, implying effectively binary memories again. Finally, spatially connected networks are able to sustain graded activity patterns (spatial "bumps"), but make strong assumptions about the spatial structure of both the connectivity and the memory patterns, and are sensitive to ongoing noise (e.g. [10, 11]). Ref. [12] provides a rare example of a spike timing-based graded memory network, but it again did not contain inhibitory units.

Here we propose a general control-theoretic framework that overcomes all of the above limitations with minimal additional assumptions. We formalize memory storage as implying two conditions: that the desired activity states be fixed points of the dynamics, and that the dynamics be stable around those fixed points. We directly optimize the network parameters, including the synaptic connectivity, to satisfy both conditions for a collection of arbitrary, graded memory patterns (Fig. 1).

Figure 1: (a) Examples of analog patterns of excitatory neuronal activities, drawn from a log-normal distribution. In all our training experiments, network parameters were optimized to stabilize a set of such analog patterns and the baseline, uniform activity state (top row). For ease of visualization, only 30 of the 100 excitatory neurons are shown. (b) Optimized values of the inhibitory (auxiliary) neuronal firing rates for 5 of 30 learned memories (corresponding to those in panel a). Only 30 of the 50 auxiliary neurons are shown. (c) Empirical distributions of firing rates across neurons and memory patterns, for each population.
The fixed point condition is achieved by minimizing the time derivative of the neural activity, such that ideally it reaches zero, at each of the desired attractor states. Stability, however, is more difficult to achieve because the fixed-point constraints tend to create strong positive feedback loops in the recurrent circuitry, and direct measures of dynamical stability (e.g. the spectral abscissa) do not admit efficient, gradient-based optimization. Thus, we use recently developed methods from robust control theory, namely the minimization of the Smoothed Spectral Abscissa (SSA, [13, 14]), to perform robust stability optimization. To satisfy biological constraints, we parametrize the networks that we optimize such that they have realistic firing rate dynamics and their connectivities obey Dale's law. We show that despite these constraints the resulting networks perform memory recall that is robust to noise in both the recall cue and the ongoing dynamics, and is stabilized through a tight dynamic balance of excitation and inhibition. This novel way of constructing structurally realistic memory networks should open new routes to the understanding of memory and its neural substrate.

2 Methods

We study a network of n = nE (excitatory) + nI (inhibitory) neurons. The activity of neuron i is represented by a single scalar potential v_i, which is converted into a firing rate r_i via a threshold-quadratic gain function (e.g. [15]):

r_i = g(v_i) := γ v_i^2 if v_i > 0, and 0 otherwise.   (1)

We set γ to 0.04, such that g(v_i) spans a few tens of Hz when v_i spans a few tens of mV, as experimentally observed in cortical areas (e.g. cat's V1 [16]). The instantaneous state of the system can be expressed as a vector v(t) := (v_1(t), ..., v_n(t)). We denote the activity of the excitatory or inhibitory subpopulation by v_exc and v_inh, respectively.
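The gain function of Eq. (1) is straightforward to implement; a minimal sketch with the paper's γ = 0.04:

```python
import numpy as np

# Threshold-quadratic gain of Eq. (1): r_i = gamma * v_i^2 for v_i > 0, else 0.
def g(v, gamma=0.04):
    return gamma * np.maximum(v, 0.0) ** 2

# A potential of 30 mV maps to 0.04 * 30^2 = 36 Hz; negative potentials give 0 Hz.
print(g(np.array([-10.0, 0.0, 10.0, 30.0])))  # rates 0, 0, 4 and 36 Hz
```

Note the absence of an upper saturation: rates keep growing quadratically with the potential, which is why stability must later come from the network rather than from the single-neuron nonlinearity.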
The recurrent interactions between neurons are governed by a synaptic weight matrix W, in which the sign of each element W_ij depends on the nature (excitatory or inhibitory) of the presynaptic neuron j. We enforce Dale's law via a reparameterization of the synaptic weights:

W_ij = s_j log(1 + exp β_ij), with s_j = +1 if j ≤ nE, and −1 otherwise,   (2)

where the β_ij's are free, unconstrained parameters. (We do not allow for autapses, i.e. we fix W_ii = 0.) The network dynamics are thus given by:

τ_i dv_i/dt = −v_i + Σ_{j=1}^{n} W_ij g(v_j) + h_i,   (3)

where τ_i is the membrane time constant, and h_i is a constant external input, independent of the memory we wish to recall. It is worth noting that, since the gain function g(v_i) defined in Eq. (1) has no upper saturation, recurrent interactions can easily result in runaway excitation and firing rates growing unbounded. However, our optimization algorithm will naturally seek stable solutions, in which firing rates are kept within a limited range due to a fine dynamic balance of excitation and inhibition [14].

Optimizing network parameters to embed attractor memories

We are going to build and study networks that have a desired set of analog activity patterns as stable fixed points of their dynamics. Let {v^µ_exc}_{µ=1,...,m} be a set of m target analog patterns (Fig. 1), defined in the space of excitatory neuronal activity (potentials). For a given pattern µ, the inhibitory neurons will be free to adjust their steady state firing rates v^µ_inh to whatever pattern proves to be optimal to maintain stability. In other words, we think of the activity of inhibitory neurons as "auxiliary" variables. A given activity pattern v^µ ≡ (v^µ⊤_exc, v^µ⊤_inh)⊤ is a stable fixed point of the network dynamics if, and only if, it satisfies the following two conditions:

dv/dt |_{v=v^µ} = 0 and α(J^µ) < 0,   (4)

where J^µ is the Jacobian matrix of the dynamics in Eq. 3, i.e.
J^µ_ij := W_ij g′(v^µ_j) − δ_ij (Kronecker's delta), and α(J^µ) denotes the spectral abscissa (SA), defined as the largest real part in the eigenvalue spectrum of J^µ. The first condition makes v^µ a fixed point of the dynamics, while the second condition makes that fixed point asymptotically stable with respect to small local perturbations. Note that the width of the basin of attraction is not captured by the SA. The two conditions in Eq. 4 depend on a set of network parameters that we will allow ourselves to optimize. These are all the synaptic weight parameters (β_ij, i ≠ j), as well as the values of the inhibitory neurons' firing rates in each attractor (v^µ_inh, µ = 1, ..., m). Thus, we may adjust a total of n(n − 1) + nI·m parameters. Using Eq. 3, the first condition in Eq. 4 can be rewritten as v^µ_i − Σ_{j=1}^{n} W_ij g(v^µ_j) − h_i = 0. Despite this equation being linear in the synaptic weights, the re-parameterization of Eq. 2 makes it nonlinear in β, and it is in any case nonlinear in v^µ_inh. We will therefore seek to satisfy this condition by minimizing ∥dv/dt|_{v=v^µ}∥², which quantifies how fast the potentials drift away when initialized in the desired attractor state v^µ. When it is zero, v^µ is a fixed point of the dynamics. Our optimization procedure (see below) may not be able to set this term to exactly zero, especially as we try to store a large number of memories, but in practice we find it becomes small enough that the Jacobian-based stability criterion remains valid. Meeting the stability condition (second condition in Eq. 4) turns out to be more involved. The SA is, in general, a non-smooth function of the matrix elements and is therefore difficult to minimize. A more suitable stability measure has been introduced recently in the context of robust control theory [13, 14], called the Smoothed Spectral Abscissa (SSA), which we will use here and denote by α̃_ε(J^µ).
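A minimal numerical sketch (not the authors' code) of the ingredients defined above — the Dale's-law weight reparameterization of Eq. (2), the dynamics of Eq. (3), and the Jacobian and spectral abscissa entering the stability condition of Eq. (4) — with toy sizes and inputs:

```python
import numpy as np

gamma = 0.04
nE, nI = 4, 2
n = nE + nI
rng = np.random.default_rng(0)

s = np.array([1.0] * nE + [-1.0] * nI)     # sign of presynaptic neuron j
beta = rng.normal(size=(n, n))             # free, unconstrained parameters
W = s * np.log1p(np.exp(beta))             # W_ij = s_j log(1 + exp beta_ij), Eq. (2)
np.fill_diagonal(W, 0.0)                   # no autapses

def g(v):
    return gamma * np.maximum(v, 0.0) ** 2       # threshold-quadratic gain, Eq. (1)

def dvdt(v, W, h, tau):
    # tau_i dv_i/dt = -v_i + sum_j W_ij g(v_j) + h_i   (Eq. 3)
    return (-v + W @ g(v) + h) / tau

def jacobian(W, v):
    gprime = 2.0 * gamma * np.maximum(v, 0.0)        # derivative of the gain
    return W * gprime[None, :] - np.eye(len(v))      # J_ij = W_ij g'(v_j) - delta_ij

def spectral_abscissa(J):
    return np.max(np.linalg.eigvals(J).real)         # alpha(J) < 0 => locally stable

# Euler-integrate towards a fixed point, then test its local stability.
tau = np.array([0.02] * nE + [0.01] * nI)            # 20 ms exc., 10 ms inh.
v, h = np.zeros(n), np.full(n, 1.0)
for _ in range(2000):
    v = v + 1e-3 * dvdt(v, W, h, tau)
print(np.linalg.norm(dvdt(v, W, h, tau)), spectral_abscissa(jacobian(W, v)))
```

The Euler loop here merely illustrates the dynamics; the paper does not integrate to find attractors but directly optimizes {β_ij} and {v^µ_inh} so that the desired patterns themselves satisfy both conditions of Eq. (4).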
The SSA, defined for some smoothness parameter ε > 0, is a differentiable relaxation of the SA, with the properties α(J^µ) < α̃_ε(J^µ) and lim_{ε→0} α̃_ε(J^µ) = α(J^µ). Therefore, the criterion α̃_ε(J^µ) ≤ 0 implies α(J^µ) < 0, and can therefore be used as an indication of local stability. Both the SSA and its gradient are straightforward to evaluate numerically, making it amenable to minimization through gradient descent. Note that the SSA depends on the Jacobian matrix elements {J^µ_ij}, which in turn depend both on the connectivity parameters {β_ij} and on v^µ_inh. Note also that the parameter ε > 0 controls how tightly the SSA hugs the SA. Small values make it a tight upper bound, with increasingly ill-behaved gradients. Large values imply more smoothness, but may no longer guarantee that the SSA has a negative minimum even though the SA might have one. In our system of n = 150 neurons we found ε = 0.01 to yield a good compromise. In the general case the distance between SA and SSA grows linearly with the number of dimensions. To keep it invariant, ε should be scaled accordingly. We therefore used the following heuristic rule: ε = 0.01 · 150/n.

We summarize the above objective into a global cost function by lumping together the fixed point and stability conditions, summing over the entire set of m target memory patterns, and adding an L2 penalty term on the synaptic weights to regularize:

ψ({β_ij}, {v^µ_inh}) := (1/m) Σ_{µ=1}^{m} [ (1/n) ∥dv/dt|_{v=v^µ}∥² + η_s α̃_ε(J^µ) ] + (η_F/n²) ∥W∥²_F,   (5)

where ∥W∥²_F is the squared Frobenius norm of W, i.e. the sum of its squared elements, and the parameters η_s and η_F control the relative importance of each component of the objective function. We set them heuristically (Table 1). We used a variant of the low-storage BFGS algorithm included in the open source library NLopt [17] to minimize ψ.

Choice of initial parameters and attractors

The synaptic weights are initially drawn randomly from a Gamma distribution with a shape factor of 2 and a mean that depends only on the type of pre- and post-synaptic population. The mean synaptic weights of the four synapse types were computed using a mean-field reduction of the full network to meet the condition that the network initially exhibits a stable baseline state v^{µ=1}_exc in which all excitatory firing rates equal r_baseline = 5 Hz (Table 1, and Supplementary Material). This baseline state was included in every set of m target attractors that we used and was thus stable from the beginning, by construction. For the remaining target patterns, {v^µ_exc}_{µ=2,...,m} were generated by inverting (using g^{−1}) firing rates that were sampled from a log-normal distribution with a mean matching the baseline firing rate, r_baseline (Fig. 1a), and a variance of 5 Hz. This log-normal distribution was chosen to roughly capture the skewed and heavy-tailed nature of firing rate distributions observed in vivo (see e.g. [18] for a review). The inhibitory potentials in the memory states, {v^µ_inh}, were initialized to the baseline, g^{−1}(5 Hz), and were subsequently used as free parameters by the learning algorithm (cf. above; see also Fig. 1b).

3 Results

Example of successful storage

Figure 2 shows an example of stability optimization: in this specific run we used 150 neurons to embed 30 graded attractors (examples of which were shown in Fig. 1), yielding a storage capacity of 0.2. Other parameters are listed in Table 1.
Gradient descent gradually reduces each of the attractor-specific sub-objectives in Eq. 5, namely the SSA, the SA, and the potential velocities ∥dv/dt∥² in each target state (Fig. 2). After convergence, the SSA has become negative for all desired states, indicating stability. Note, however, that ∥dv/dt∥ after convergence is small but non-zero in each of the target memories. Thus, strictly speaking, the target patterns haven't become fixed points of the dynamics, but only slow points from which the system will eventually drift away. In practice though, we found that stability was robust enough that an exact, stable fixed point had in fact been created very near each target pattern. This is detailed below.

Figure 2: (a) Decrease of the SA (solid line) and of the SSA (dotted line) during learning in systems with 30 (purple) and 50 attractors (orange). Thick lines show averages across attractors, flanking lines show the corresponding standard deviations. The x-axis marks the actual duration of the run of the learning algorithm. (b) Euclidean norm of the velocity at the fixed point during learning. Lines and colors as in a. Note the logarithmic y-axis.

Table 1: Parameter settings
nE = 100    τE = 20 ms    η_s = 0.02
nI = 50     τI = 10 ms    η_F = 0.001
m = 30      r_baseline = 5 Hz

Memory recall performance and robustness

For recall, we initialize neuronal activities at a noisy version of one of the target patterns, and study the subsequent evolution of the network state. The network performs well if its dynamics clean up the noise and home in on the target pattern (autoassociative behavior) and if it achieves this robustly even in the face of large amounts of noise. Initial cues are chosen to be linear combinations of the form r(t = 0) = σ r̃ + (1 − σ) r^µ, where r^µ is the memory we intend to recall and r̃ is an independent random vector with the same log-normal statistics used to generate the memory patterns themselves. The parameter σ regulates the noise level: σ = 0 sets the network activity directly in the desired attractor, while σ = 1 initializes it with completely random values. The deviation of the momentary network state r(t) ≡ g(v(t)) from the target pattern r^µ ≡ g(v^µ) is measured in terms of the squared Euclidean distance, further normalized by the expected squared distance between r^µ and a random pattern drawn from the same distribution (log-normal in our case). Formally:

d_µ(t) := ∥r_exc(t) − r^µ_exc∥² / ⟨∥r̃_exc − r^µ_exc∥²⟩_r̃.   (6)

Figure 3a shows the temporal evolution of d_µ(t) on a few sample recall trials, for two different noise levels σ. For σ = 0.5, recalls are always successful, as the network state converges to the right target pattern on each trial.
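The cue-corruption scheme and the normalized distance of Eq. (6) can be sketched as follows; the log-normal parameters below are illustrative stand-ins, not the paper's exact values:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_cue(r_mu, sigma, rng):
    # lognormal distractor with roughly the statistics used for the memories
    r_tilde = rng.lognormal(mean=np.log(5.0), sigma=0.4, size=r_mu.shape)
    return sigma * r_tilde + (1.0 - sigma) * r_mu    # r(t=0) = sigma*r~ + (1-sigma)*r^mu

def normalized_distance(r, r_mu, expected_sq_dist):
    # d_mu of Eq. (6): squared distance to the target, normalized by the
    # expected squared distance to a random pattern (passed in precomputed).
    return np.sum((r - r_mu) ** 2) / expected_sq_dist

r_mu = rng.lognormal(mean=np.log(5.0), sigma=0.4, size=100)   # one memory pattern
cue = noisy_cue(r_mu, sigma=0.5, rng=rng)
# sigma = 0 reproduces the memory exactly, so the distance is zero:
print(normalized_distance(noisy_cue(r_mu, 0.0, rng), r_mu, 1.0))  # 0.0
```

In the paper the normalizer is the average over many random patterns r̃; here it is passed in as a precomputed constant to keep the sketch short.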
For σ = 0.75, the network activity occasionally settles into another, distinct attractor. We used the convention that a trial is deemed successful if the distance d_µ(t) falls below 0.001. (A ∼3 Hz deviation from the target in only one of the 100 exc. neurons, with all other 99 neurons behaving perfectly, would be sufficient to cross this threshold and fail the test.) We further measure performance as the probability of successful recall, which we estimated from many independent trials with different realizations of the noise r̃ in the initial condition (Figure 3b). The network performance is also compared to an "ideal observer" [6] that has direct access to all the stored memories (rather than just their reflection in the synaptic weights) and simply returns that pattern in the training set {r^µ} to which the initial cue is closest (Fig. 3b). Thus, as an upper bound on performance, the ideal observer only produces a wrong recall when the added noise brings the initial state closer to an attractor that is different from the target. Remarkably, our network dynamics (continuous lines) and the ideal observer (dashed lines) have comparable performances. When trying to recall the uniform pattern of baseline activity, the performance appears much better (orange line) both for the ideal observer and the network. This is simply because the random vectors used to perturb the system have a high probability of lying closer to the mean of the log-normal distribution (that is, the baseline state) than to any other memory pattern. Moreover, the network was initialized prior to learning with the baseline as the single global attractor, and this might account for the additional tendency of the network (solid orange line) to fall into this state, as compared to the ideal observer (dotted orange line).

Figure 3: (a) Example recall trials for a single memory r^µ, which is presented to the network at time t = 0 in a corrupted version that is different on every trial, for two different values of the noise level σ (colors). Shown here is the temporal evolution of the momentary distance between the vector of excitatory firing rates r_exc(t) and the memory pattern r^µ_exc. Different lines correspond to different trials. (b) Fraction of trials that converged onto the correct attractor (final distance d_µ(t = ∞) < 0.001, cf. text) as a function of the normalized distance between the initial condition and the desired attractor, d_µ(t = 0). Thick lines show medians across attractors, flanking thin lines show the 25th and 75th percentiles. The performance of the baseline state is shown separately (orange). The dashed lines show the performance of an "ideal observer", always selecting the memory closest to the initial condition, for the same trials.

Only a few strong synaptic weights contribute to memory recall

Synaptic weights after learning (Fig. 4a) are sparse: their distribution shows the characteristic peak near zero and the long tail observed in real cortical circuits [19, 20] (Fig. 4b). This sparseness cannot be accounted for by the L2 norm regularizer in the cost function (Eq. 5), as it does not promote sparsity as an L1 term would. Thus, the observed sparsity in the trained network must be a genuine consequence of having optimized the connectivity for robust stability. If we assume that weights |W_ij| ≤ 0.01 correspond to functionally silent synapses, then the trained network contains 52% of silent excitatory synapses and 46% of silent inhibitory ones (Fig. 4c). We wondered if those weak, "silent" synapses are necessary for stability of memory recall, or could be removed altogether without affecting performance. To test that, we clipped those synapses {|W_ij| < 0.01} to zero, and computed recall performance again (Fig. 4d).
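The silent-synapse analysis amounts to thresholding the weight magnitudes; a minimal sketch on random stand-in weights (the real analysis uses the trained W):

```python
import numpy as np

# Stand-in weight matrix: Gaussian weights with roughly half the entries zeroed,
# purely for illustration; the paper analyzes the learned connectivity.
rng = np.random.default_rng(2)
W = rng.normal(scale=0.05, size=(150, 150)) * (rng.random((150, 150)) < 0.5)

silent = (np.abs(W) < 0.01) & ~np.eye(150, dtype=bool)   # exclude autapses
print(silent.mean())                                      # fraction of silent synapses

W_clipped = np.where(np.abs(W) < 0.01, 0.0, W)           # remove weak synapses
```

Recall performance would then be re-measured with `W_clipped` in place of `W`, as done for Fig. 4d.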
This clipping turns out to slightly shift the position of the attractors in state space, so we increased the distance threshold that defines a successful recall trial to 0.08. The test reveals that one of the attractors loses stability, reducing the average performance. However, the remaining 29 attractors are robust to this removal of weak synapses and show near-equal recall performance as above. This demonstrates that small weights, though numerous, are not necessary for competent recall performance.

Balanced state

As a result of the connection weight distributions and robust stability, the trained network produces a regime in which excitation and inhibition balance each other, precisely tuning each neuron to its target frequency in each attractor. Excitatory and inhibitory inputs are defined as h^exc_i(t) = Σ_{j=1}^{n} ⌊W_ij⌋₊ r_j(t) and h^inh_i(t) = Σ_{j=1}^{n} ⌊−W_ij⌋₊ r_j(t), so that the difference h^exc_i(t) − h^inh_i(t) corresponds to the total recurrent input, i.e. the second term on the r.h.s. of Eq. 3.

Figure 4: (a) Synaptic weight matrix after learning. Note the logarithmic color scale. (b) Distribution of the excitatory (red) and inhibitory (blue) weights. (c) Cumulative weight distribution of absolute weight values. Gray line marks the 0.01 threshold we use to define "silent" synapses. (d) Performance of the network after clipping the weights below 0.01 to zero (black, median with 25th and 75th percentiles), compared to the performance of the unperturbed network redrawn from Fig. 3 (purple).
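The E/I input decomposition defined above rectifies the positive and negative parts of W separately, so that their difference recovers the full recurrent input; a minimal sketch with made-up weights and rates:

```python
import numpy as np

# h_exc_i = sum_j [W_ij]_+ r_j and h_inh_i = sum_j [-W_ij]_+ r_j, where [.]_+
# denotes rectification; h_exc - h_inh equals the total recurrent input W @ r.
def ei_inputs(W, r):
    h_exc = np.maximum(W, 0.0) @ r
    h_inh = np.maximum(-W, 0.0) @ r
    return h_exc, h_inh

W = np.array([[0.0, 2.0, -1.0],
              [0.5, 0.0, -3.0],
              [1.0, 1.0, 0.0]])
r = np.array([5.0, 2.0, 4.0])        # firing rates (Hz)
h_exc, h_inh = ei_inputs(W, r)
print(h_exc - h_inh)                 # equals W @ r
```

Both components are nonnegative and can individually be large even when their difference is small, which is exactly the balanced regime shown in Fig. 5.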
Figure 5: (a) Dynamics of the excitatory and inhibitory inputs during a memory recall trial, for three sample neurons. (b) Scatter plot of steady-state excitatory versus inhibitory inputs. Each dot corresponds to a different memory pattern, and several neurons are shown in different colors. (c) Histogram of E and I input correlations across all memories for each neuron (for example, one value binned in this histogram would be the correlation between all green dots in b).

Figure 5a shows the evolution of h^exc_i(t) and h^inh_i(t) during a recall trial for one of the stored random attractors, for 3 different neurons. Neuron 3 has a target rate of 9 Hz, well above average; therefore its excitation is much higher than inhibition. Neuron 72 has a steady state firing rate of 2 Hz, below average: its inhibitory input is greater than the excitatory one, and firing is driven by the external current. Finally, neuron 101 is inhibitory and has a target rate of 0 Hz, and indeed its inhibitory input is large enough to overwhelm the combined effects of the external and recurrent excitatory inputs. Notably, in all these cases, both E and I input currents are fairly large but cancel each other to leave something smaller, either positive or negative. Figure 5b shows the E vs. I inputs at steady-state across all the embedded attractors, for various neurons plotted in different colors. These E and I inputs tend to be correlated across attractors for every single neuron (dots in Fig. 5b tend to hug the identity line), with relative differences fine-tuned to yield the desired firing rates. These across-attractors E/I correlations are summarized in Fig. 5c as a histogram over neurons.
Robustness to ongoing noise and reduction of across-trial variability following recall onset

Finally, to probe the system under more realistic dynamics, we added time-varying Gaussian white noise such that, in an excitatory neuron free from network interactions, the potential would fluctuate with standard deviation 0.33.

Figure 6: (a) Normalized distance calculated according to Eq. 6 between the network activity and each of the attractors (targeted attractor: green line; others: orange lines) during a noisy recall episode. (b) Trial-to-trial variability, expressed as the standard deviation of a neuron's activity across multiple repetitions with random initial conditions. At time t = 0.5 s the network receives a pulse in the direction of one target attractor (µ = 2). Gray lines are for single neurons; the black line is an average over the population.

Figure 6a shows the momentary distance d_µ(t) of the network state from the attractor closest to the initial cue (green), and from all other attractors (orange), during a recall trial. It is clear that the system revolves around the desired attractor, performing successful recall despite the ongoing noise. In a second experiment, we ran many trials in which the initialization at time t = 0 was random, while the same spatially patterned stimulation – aligned onto a chosen attractor – was given to the network in each trial at time t = 0.5 s. Figure 6b shows the standard deviation of the internal state of a neuron across trials, averaged across the neural population. Following stimulus onset, neurons are always pushed towards the target attractor, and this greatly reduces trial-by-trial variability, compared to the initial spontaneous regime in which each neuron would fluctuate around any of the activity levels corresponding to its assigned attractors.
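The two measurements in Fig. 6 can be sketched on synthetic trajectories. Eq. 6 is not reproduced in this excerpt, so d_µ(t) is assumed here to be a Euclidean distance to attractor µ normalized by the attractor's norm; the attractors and trajectories below are hypothetical:

```python
import numpy as np

# Synthetic sketch of Fig. 6: normalized distance to an attractor, and
# trial-to-trial variability before/after a patterned stimulus.
rng = np.random.default_rng(2)
n, T, n_trials = 100, 200, 50
attractors = rng.uniform(0.0, 10.0, size=(5, n))   # hypothetical stored patterns
target = attractors[2]

def d_mu(r_t, attractor):
    """Assumed form of Eq. 6: normalized Euclidean distance to one attractor."""
    return np.linalg.norm(r_t - attractor) / np.linalg.norm(attractor)

# Noisy trajectories that are pulled onto the target attractor after t = 100
trajs = rng.normal(5.0, 2.0, size=(n_trials, T, n))
t = np.arange(T)[None, :, None]
pull = np.clip((t - 100) / 50.0, 0.0, 1.0)          # ramps 0 -> 1 after "stimulus onset"
trajs = (1 - pull) * trajs + pull * (target[None, None, :]
                                     + rng.normal(0.0, 0.1, size=trajs.shape))

# Trial-to-trial variability: std over trials, averaged over neurons
variability = trajs.std(axis=0).mean(axis=1)        # shape (T,)
# Variability collapses once the patterned stimulus aligns the trials.
```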
Interestingly, such stimulus-induced variability reduction has been observed very broadly across sensory and motor cortical areas [21]. This extends previous work, e.g. [22] and [23], showing variability reduction in a multiple-attractor scenario with effectively binary patterns, to the case of patterns with graded activities.

4 Discussion

We have provided a proof of concept that model cortical networks of E and I neurons can embed multiple analog memories as stable fixed points of their dynamics. Memories are stable in the face of ongoing noise and corruption of the recall cues. Neuronal activities do not saturate, and indeed, our single-neuron model did not explicitly incorporate an upper saturation mechanism: dynamic feedback inhibition, precisely matched to the level of excitation incurred by each attractor, ensures that each neuron can fire at a relatively low rate during recall. As a result, excitation and inhibition are tightly balanced. We have used a rate-based formulation of the circuit dynamics, which raises the question of the applicability of our method to understanding spiking memory networks. Once the connectivity in the rate model is generated and optimized, it could still be used in a spiking model, provided the gain function we have used here matches that of the single spiking neurons. In this respect, the gain function we have used is likely an appropriate choice: in physiological conditions, cortical neurons have input-output gain functions that are well approximated by a rectified power-law function over their entire dynamic range [24, 25, 26]. An important question for future research is how local synaptic learning rules can achieve the stabilization objective that we have approached here from an optimal, algorithmic viewpoint.
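A rectified power-law gain function of the kind referred to above can be sketched as follows; the constants k and alpha are hypothetical, since this excerpt does not give the parameters used in the paper:

```python
import numpy as np

# Generic rectified power-law gain: r = k * max(v, 0)**alpha.
# k and alpha are placeholder values, not the paper's fitted parameters.
def gain(v, k=0.3, alpha=2.0):
    """Zero below threshold, supralinear growth above it."""
    return k * np.clip(v, 0.0, None) ** alpha

v = np.linspace(-2.0, 4.0, 7)
rates = gain(v)
# Sub-threshold potentials produce zero rate; above zero the curve is convex.
```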
Inhibitory synaptic plasticity is a promising candidate, as it has already been shown to enable self-regulation of the spontaneous, baseline activity regime, and also to promote the stable storage of binary memory patterns [27]. More work is required in this direction.

Acknowledgements. This work was supported by the Wellcome Trust (GH, ML), the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 269921 (BrainScaleS) (DF, ML), and the Swiss National Science Foundation (GH).

References
[1] Hopfield J. Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences 79:2554, 1982.
[2] Hopfield J. Neurons with graded response have collective computational properties like those of two-state neurons, Proceedings of the National Academy of Sciences 81:3088, 1984.
[3] Treves A. Graded-response neurons and information encodings in autoassociative memories, Phys. Rev. A 42:2418, 1990.
[4] Treves A, Rolls ET. What determines the capacity of autoassociative memories in the brain?, Network: Computation in Neural Systems 2:371, 1991.
[5] Battaglia FP, Treves A. Stable and rapid recurrent processing in realistic autoassociative memories, Neural Comput 10:431, 1998.
[6] Lengyel M, Dayan P. Rate- and phase-coded autoassociative memory, In Advances in Neural Information Processing Systems 17, 769, Cambridge, MA, 2005. MIT Press.
[7] Amit D, Brunel N. Dynamics of a recurrent network of spiking neurons before and following learning, Network: Computation in Neural Systems 8:373, 1997.
[8] Latham P, Nirenberg S. Computing and stability in cortical networks, Neural Computation 16:1385, 2004.
[9] Roudi Y, Latham PE. A balanced memory network, PLoS Computational Biology 3:e141, 2007.
[10] Ben-Yishai R, et al. Theory of orientation tuning in visual cortex, Proc. Natl. Acad. Sci. USA 92:3844, 1995.
[11] Goldberg JA, et al.
Patterns of ongoing activity and the functional architecture of the primary visual cortex, Neuron 42:489, 2004.
[12] Lengyel M, et al. Matching storage and recall: hippocampal spike timing-dependent plasticity and phase response curves, Nature Neuroscience 8:1677, 2005.
[13] Vanbiervliet J, et al. The smoothed spectral abscissa for robust stability optimization, SIAM Journal on Optimization 20:156, 2009.
[14] Hennequin G, et al. Optimal control of transient dynamics in balanced networks supports generation of complex movements, Neuron 82:1394, 2014.
[15] Ahmadian Y, et al. Analysis of the stabilized supralinear network, Neural Comput. 25:1994, 2013.
[16] Anderson JS, et al. The contribution of noise to contrast invariance of orientation tuning in cat visual cortex, Science 290:1968, 2000.
[17] Johnson SG. The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt
[18] Roxin A, et al. On the distribution of firing rates in networks of cortical neurons, The Journal of Neuroscience 31:16217, 2011.
[19] Song S, et al. Highly nonrandom features of synaptic connectivity in local cortical circuits, PLoS Biol 3:e68, 2005.
[20] Lefort S, et al. The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex, Neuron 61:301, 2009.
[21] Churchland MM, et al. Stimulus onset quenches neural variability: a widespread cortical phenomenon, Nat Neurosci 13:369, 2010.
[22] Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections, Nat Neurosci 15:1498, 2012.
[23] Deco G, Hugues E. Neural network mechanisms underlying stimulus driven variability reduction, PLoS Computational Biology 8:e1002395, 2012.
[24] Priebe NJ, Ferster D. Direction selectivity of excitation and inhibition in simple cells of the cat primary visual cortex, Neuron 45:133, 2005.
[25] Priebe NJ, Ferster D. Mechanisms underlying cross-orientation suppression in cat visual cortex, Nat Neurosci 9:552, 2006.
[26] Finn IM, et al. The emergence of contrast-invariant orientation tuning in simple cells of cat visual cortex, Neuron 54:137, 2007.
[27] Vogels TP, et al. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks, Science 334:1569, 2011.
Transportability from Multiple Environments with Limited Experiments: Completeness Results

Elias Bareinboim, Computer Science, UCLA, eb@cs.ucla.edu
Judea Pearl, Computer Science, UCLA, judea@cs.ucla.edu

Abstract

This paper addresses the problem of mz-transportability, that is, transferring causal knowledge collected in several heterogeneous domains to a target domain in which only passive observations and limited experimental data can be collected. The paper first establishes a necessary and sufficient condition for deciding the feasibility of mz-transportability, i.e., whether causal effects in the target domain are estimable from the information available. It further proves that a previously established algorithm for computing a transport formula is in fact complete, that is, failure of the algorithm implies non-existence of a transport formula. Finally, the paper shows that the do-calculus is complete for the mz-transportability class.

1 Motivation

The issue of generalizing causal knowledge is central to scientific inference, since experiments are conducted, and conclusions obtained, in a laboratory setting (i.e., a specific population, domain, or study) and then transported and applied elsewhere, in an environment that differs in many aspects from that of the laboratory. If the target environment is arbitrary, or drastically different from the study environment, no causal relations can be learned and scientific progress will come to a standstill. However, the fact that scientific experimentation continues to provide useful information about our world suggests that certain environments share common characteristics and that, owing to these commonalities, causal claims would be valid even where experiments have never been performed. Remarkably, the conditions under which this type of extrapolation can be legitimized have not been formally articulated until very recently.
Although the problem has been extensively discussed in statistics, economics, and the health sciences, under rubrics such as "external validity" [1, 2], "meta-analysis" [3], "quasi-experiments" [4], and "heterogeneity" [5], these discussions are limited to verbal narratives in the form of heuristic guidelines for experimental researchers – no formal treatment of the problem has been attempted to answer the practical challenge of generalizing causal knowledge across multiple heterogeneous domains with disparate experimental data, as posed in this paper. The lack of sound mathematical machinery in such settings precludes one of the main goals of machine learning (and, by and large, computer science), which is automating the process of discovery. The class of problems of causal generalizability is called transportability and was first formally articulated in [6]. We consider the most general instance of transportability known to date, namely the problem of transporting experimental knowledge from heterogeneous settings to a certain specific target. [6] introduced a formal language for encoding differences and commonalities between domains, accompanied by necessary or sufficient conditions under which transportability of empirical findings is feasible between two domains, a source and a target; these conditions were then extended to a complete characterization of transportability in one domain with unrestricted experimental data [7, 8].
Subsequently, assumptions were relaxed to consider settings where only limited experiments are available in the source domain [9, 10], then settings where multiple source domains with unrestricted experimental information are available [11, 12], and finally multiple heterogeneous sources with limited and distinct experiments [13], which was called "mz-transportability". Specifically, the mz-transportability problem concerns the transfer of causal knowledge from a heterogeneous collection of source domains Π = {π1, ..., πn} to a target domain π∗. In each domain πi ∈ Π, experiments over a set of variables Zi can be performed and causal knowledge gathered. In π∗, potentially different from πi, only passive observations can be collected (this constraint will be weakened). The problem is to infer a causal relationship R in π∗ using knowledge obtained in Π. The problem studied here generalizes the previously studied one-dimensional version of transportability with limited scope and the multi-dimensional version with unlimited scope. Interestingly, while certain effects might not be individually transportable to the target domain from the experiments in any of the available sources, combining different pieces from the various sources may enable their estimation. Conversely, it is also possible that effects are not estimable from multiple experiments in individual domains, but are estimable from experiments scattered throughout the domains (discussed below). The goal of this paper is to formally understand the conditions under which causal effects in the target domain are (non-parametrically) estimable from the available data. Sufficient conditions for mz-transportability were given in [13], but that treatment falls short of providing guarantees as to whether these conditions are also necessary, should be augmented, or even replaced by more general ones.
This paper establishes the following results:
• A necessary and sufficient condition for deciding when causal effects in the target domain are estimable from both the statistical information available and the causal information transferred from the experiments in the domains.
• A proof that the algorithm proposed in [13] is in fact complete for computing the transport formula, that is, the strategy devised for combining the empirical evidence to synthesize the target relation cannot be improved upon.
• A proof that the do-calculus is complete for the mz-transportability class.

2 Background in Transportability

In this section, we consider other transportability instances and discuss their relationship with the mz-transportability setting. Consider Fig. 1(a), in which the node S represents factors that produce differences between source and target populations. We conduct a randomized trial in Los Angeles (LA) and estimate the causal effect of treatment X on outcome Y for every age group Z = z, denoted by P(y|do(x), z). We now wish to generalize the results to the population of New York City (NYC), but we find the distribution P(x, y, z) in LA to be different from the one in NYC (call the latter P∗(x, y, z)). In particular, the average age in NYC is significantly higher than that in LA. How are we to estimate the causal effect of X on Y in NYC, denoted R = P∗(y|do(x))? The selection diagram – the overlapping of the diagrams of LA and NYC – for this example (Fig. 1(a)) conveys the assumption that the only difference between the two populations lies in the factors determining age distributions, shown as S → Z, while the age-specific effects P∗(y|do(x), Z = z) are invariant across populations. Difference-generating factors are represented by a special set of variables called selection variables S (or simply S-variables), which are graphically depicted as square nodes (■).
From this assumption, the overall causal effect in NYC can be derived as follows:

R = Σ_z P∗(y|do(x), z) P∗(z) = Σ_z P(y|do(x), z) P∗(z)    (1)

The last line constitutes a transport formula for R; it combines experimental results obtained in LA, P(y|do(x), z), with observational aspects of the NYC population, P∗(z), to obtain a causal claim P∗(y|do(x)) about NYC. In this trivial example, the transport formula amounts to a simple recalibration (or re-weighting) of the age-specific effects to account for the new age distribution.

Footnote 1: Traditionally, the machine learning literature has been concerned with discrepancies among domains in the context, almost exclusively, of predictive or classification tasks, as opposed to learning causal or counterfactual measures [14, 15]. Interestingly, recent work on anticausal learning leverages knowledge about invariances of the underlying data-generating structure across domains, moving the literature towards more general modalities of learning [16, 17].
Footnote 2: We will use P_x(y|z) interchangeably with P(y|do(x), z).
Footnote 3: We use the structural interpretation of causal diagrams as described in [18, pp. 205] (see also Appendix 1).

Figure 1: (a) Selection diagram illustrating when transportability of R = P∗(y|do(x)) between two domains is trivially solved through simple recalibration. (b) The smallest diagram in which a causal relation is not transportable. (c,d) Selection diagrams illustrating the impossibility of estimating R through individual transportability from πa and πb even when Z = {Z1, Z2}. If experiments over {Z2} are available in πa and over {Z1} in πb, R is transportable. (e,f) Selection diagrams illustrating the opposite phenomenon – transportability through multiple domains is not feasible, but with Z = {Z1, Z2} in one domain it is. The selection variables S are depicted as square nodes (■).
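Eq. (1) is a simple re-weighting and can be evaluated numerically; all the numbers below are hypothetical:

```python
import numpy as np

# Toy instance of the transport formula in Eq. (1), three age groups z.
# Experimental, age-specific effects measured in LA:
P_y_dox_z = np.array([0.2, 0.5, 0.8])   # P(y | do(x), z) for z = 0, 1, 2
# Age distributions in the two cities (hypothetical):
P_z_LA  = np.array([0.5, 0.3, 0.2])
P_z_NYC = np.array([0.2, 0.3, 0.5])     # NYC population is older

# Recalibrate the z-specific effects with the *target* covariate distribution:
R = float(P_y_dox_z @ P_z_NYC)          # P*(y | do(x)) in NYC
# Compare with the naive estimate that keeps LA's age distribution:
R_LA = float(P_y_dox_z @ P_z_LA)
```

With these numbers, R = 0.59 while the naive LA-weighted value is 0.41, showing why re-weighting by the target's covariate distribution matters.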
In general, however, a more involved mixture of experimental and observational findings would be necessary to obtain an unbiased estimate of the target relation R. In certain cases there is no way to synthesize a transport formula; for instance, Fig. 1(b) depicts the smallest example in which transportability is not feasible (even with X randomized). Our goal is to characterize these cases. In real-world applications, it may happen that only a limited amount of experimental information can be gathered at the source environment. The question arises whether an investigator in possession of a limited set of experiments would still be able to estimate the desired effects at the target domain. To illustrate some of the subtle issues that mz-transportability entails, consider Fig. 1(c,d), which concerns the transport of experimental results from two sources ({πa, πb}) to infer the effect of X on Y in π∗, R = P∗(y|do(x)). In these diagrams, X may represent the treatment (e.g., cholesterol level), Z1 represents a pre-treatment variable (e.g., diet), Z2 represents an intermediate variable (e.g., biomarker), and Y represents the outcome (e.g., heart failure). Assume that experimental studies randomizing {Z2} can be conducted in domain πa, and randomizing {Z1} in domain πb. A simple analysis shows that R cannot be transported from either source alone (even when experiments are available over both variables) [9]. Still, combining experiments from both sources allows one to determine the effect in the target through the following transport formula [13]:

P∗(y|do(x)) = Σ_{z2} P^(b)(z2|x, do(Z1)) P^(a)(y|do(z2))    (2)

This transport formula is a mixture of the experimental result over {Z1} from πb, P^(b)(z2|x, do(Z1)), with the result of the experiment over {Z2} in πa, P^(a)(y|do(z2)), and constitutes a consistent estimand of the target relation in π∗. Further consider Fig. 1(e,f), which illustrates the opposite phenomenon.
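Eq. (2) can likewise be evaluated on toy numbers; the conditional probability tables below are hypothetical, chosen only to show how the two sources' experimental results are mixed:

```python
import numpy as np

# Toy evaluation of the transport formula in Eq. (2) for binary Z2.
# From pi_b (experiment over Z1): P^(b)(z2 | x, do(z1)), indexed by z2
P_b_z2 = np.array([0.7, 0.3])            # hypothetical table
# From pi_a (experiment over Z2): P^(a)(y | do(z2)) for z2 = 0, 1
P_a_y = np.array([0.1, 0.9])             # hypothetical table

# Mix the two sources' experimental results, summing over z2:
P_star_y_dox = float(P_b_z2 @ P_a_y)     # P*(y | do(x))
```

Neither table alone determines the target effect; the estimand only becomes computable once the two pieces of experimental knowledge are combined.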
In this case, if experiments over {Z2} are available in domain πa and over {Z1} in πb, R is not transportable. However, if {Z1, Z2} are available in the same domain, say πa, R is transportable and equals P^(a)(y|x, do(Z1, Z2)), independently of the values of Z1 and Z2. These intriguing results raise two fundamental issues that will be answered throughout this paper. First, whether the do-calculus is complete relative to such problems, that is, whether it would always find a transport formula whenever one exists. Second, even assuming that there exists a sequence of applications of the do-calculus that achieves the reduction required by mz-transportability, finding such a sequence may be computationally intractable, so an efficient way of obtaining the formula is needed.

3 A Graphical Condition for mz-transportability

The basic semantical framework of our analysis rests on structural causal models as defined in [18, pp. 205], also called data-generating models. In the structural causal framework [18, Ch. 7], actions are modifications of functional relationships, and each action do(x) on a causal model M produces a new model M_x = ⟨U, V, F_x, P(U)⟩, where V is the set of observable variables, U is the set of unobservable variables, and F_x is obtained after replacing f_X ∈ F for every X ∈ X with a new function that outputs the constant value x given by do(x). We follow the conventions given in [18]. We denote variables by capital letters and their realized values by small letters. Similarly, sets of variables will be denoted by bold capital letters, and sets of realized values by bold small letters. We use the typical graph-theoretic terminology with the corresponding abbreviations De(Y)_G, Pa(Y)_G, and An(Y)_G, which denote, respectively, the sets of observable descendants, parents, and ancestors of the node set Y in G. A graph G_Y will denote the subgraph of G induced by the nodes in Y and all arrows between such nodes.
Finally, G_{XZ} stands for the edge subgraph of G where all arrows incoming into X and all arrows outgoing from Z are removed. Key to the analysis of transportability is the notion of identifiability [18, pp. 77], which expresses the requirement that causal effects be computable from a combination of non-experimental data P and assumptions embodied in a causal diagram G. Causal models and their induced diagrams are associated with one particular domain (i.e., setting, population, environment), and this representation is extended in transportability to capture properties of two domains simultaneously. This is possible if we assume that the structural equations share the same set of arguments, though the functional forms of the equations may vary arbitrarily [7].

Definition 1 (Selection Diagrams). Let ⟨M, M∗⟩ be a pair of structural causal models relative to domains ⟨π, π∗⟩, sharing a diagram G. ⟨M, M∗⟩ is said to induce a selection diagram D if D is constructed as follows: every edge in G is also an edge in D; D contains an extra edge S_i → V_i whenever there might exist a discrepancy f_i ≠ f∗_i or P(U_i) ≠ P∗(U_i) between M and M∗.

In words, the S-variables locate the mechanisms where structural discrepancies between the two domains are suspected to take place. Armed with the concepts of identifiability and selection diagrams, mz-transportability of causal effects can be defined as follows [13]:

Definition 2 (mz-Transportability). Let D = {D^(1), ..., D^(n)} be a collection of selection diagrams relative to source domains Π = {π1, ..., πn} and target domain π∗, respectively, and let Z_i (and Z∗) be the variables in which experiments can be conducted in domain πi (and π∗). Let ⟨P^i, I^i_z⟩ be the pair of observational and interventional distributions of πi, where I^i_z = ∪_{Z′⊆Z_i} P^i(v|do(z′)), and, in an analogous manner, let ⟨P∗, I∗_z⟩ be the observational and interventional distributions of π∗.
The causal effect R = P∗_x(y) is said to be mz-transportable from Π to π∗ in D if P∗_x(y) is uniquely computable from ∪_{i=1,...,n} ⟨P^i, I^i_z⟩ ∪ ⟨P∗, I∗_z⟩ in any model that induces D.

While this definition might appear convoluted, it is nothing more than a formalization of the statement "R needs to be uniquely computable from the information set IS alone." Naturally, when IS has many components (multiple observational and interventional distributions), it becomes lengthy. This requirement of computability from ⟨P∗, I∗_z⟩ and ⟨P^i, I^i_z⟩ from all sources has a syntactic image in the do-calculus, which is captured by the following sufficient condition:

Theorem 1 ([13]). Let D = {D^(1), ..., D^(n)} be a collection of selection diagrams relative to source domains Π = {π1, ..., πn} and target domain π∗, respectively, and let S_i represent the collection of S-variables in the selection diagram D^(i). Let {⟨P^i, I^i_z⟩} and ⟨P∗, I∗_z⟩ be, respectively, the pairs of observational and interventional distributions in the sources Π and target π∗. The effect R = P∗(y|do(x)) is mz-transportable from Π to π∗ in D if the expression P(y|do(x), S1, ..., Sn) is reducible, using the rules of the do-calculus, to an expression in which (1) do-operators that apply to subsets of I^i_z have no S_i-variables, or (2) do-operators apply only to subsets of I∗_z.

It is not difficult to see that in Fig. 1(c,d) (and also in Fig. 1(e,f)) a sequence of applications of the rules of the do-calculus indeed reaches the reduction required by the theorem and yields a transport formula as shown in Section 2. It is not obvious, however, whether such a sequence exists in Fig. 2(a,b) when experiments over {X} are available in πa and over {Z} in πb, and if it does not exist, it is also not clear whether this would imply the inability to transport.
It turns out that in this specific example there is no such sequence and the target relation R is not transportable, which means that there exist two models that are equally compatible with the data (i.e., both could generate the same dataset) while each model entails a different answer for the effect R (violating the uniqueness requirement of Def. 2). To demonstrate this fact formally, we show the existence of two structural models M1 and M2 such that the following equalities and inequality between distributions hold:

P^(a)_M1(X, Z, Y) = P^(a)_M2(X, Z, Y),
P^(b)_M1(X, Z, Y) = P^(b)_M2(X, Z, Y),
P^(a)_M1(Z, Y|do(X)) = P^(a)_M2(Z, Y|do(X)),
P^(b)_M1(X, Y|do(Z)) = P^(b)_M2(X, Y|do(Z)),
P∗_M1(X, Z, Y) = P∗_M2(X, Z, Y),    (3)

for all values of X, Z, and Y, and

P∗_M1(Y|do(X)) ≠ P∗_M2(Y|do(X)),    (4)

for some value of X and Y.

Footnote 4: As discussed in the reference, the assumption of no structural changes between domains can be relaxed, but some structural assumptions regarding the discrepancies between domains must still hold (e.g., acyclicity).
Footnote 5: Transportability assumes that enough structural knowledge about both domains is known in order to substantiate the production of their respective causal diagrams. In the absence of such knowledge, causal discovery algorithms might be used to infer the diagrams from data [19, 18].
Footnote 6: This is usually an indication that the current state of scientific knowledge about the problem (encoded in the form of a selection diagram) does not constrain the observed distributions in such a way that an answer is entailed independently of the details of the functions and the probability over the exogenous variables.

Figure 2: (a,b) Selection diagrams in which it is not possible to transport R = P∗(y|do(x)) with experiments over {X} in πa and {Z} in πb. (c,d) Examples of diagrams in which some paths need to be extended to satisfy the definition of mz∗-shedge.
Let us assume that all variables in U ∪ V are binary. Let U1, U2 ∈ U be the common causes of X and Y, and of Z and Y, respectively; let U3, U4 ∈ U be the random disturbances exclusive to Z and Y, respectively; and let U5, U6 ∈ U be extra random disturbances exclusive to Y. Let Sa and Sb index the models in the following way: the tuples ⟨Sa = 1, Sb = 0⟩, ⟨Sa = 0, Sb = 1⟩, and ⟨Sa = 0, Sb = 0⟩ represent domains πa, πb, and π∗, respectively. Define the two models as follows:

M1:  X = U1
     Z = U2 ⊕ (U3 ∧ Sa)
     Y = ((X ⊕ Z ⊕ U1 ⊕ U2 ⊕ (U4 ∧ Sb)) ∧ U5) + (¬U5 ∧ U6)

M2:  X = U1
     Z = U2 ⊕ (U3 ∧ Sa)
     Y = ((Z ⊕ U2 ⊕ (U4 ∧ Sb)) ∧ U5) ⊕ (¬U5 ∧ U6)

where ⊕ represents the exclusive-or function. Both models agree with respect to P(U), which is defined as P(Ui) = 1/2, i = 1, ..., 6. It is not difficult to evaluate these models and note that the constraints given in Eqs. (3) and (4) are indeed satisfied (including positivity); the result follows.

Given that our goal is to demonstrate the converse of Theorem 1, we collect different examples of non-transportability, like the previous one, and try to determine whether there is a pattern in such cases and how to generalize them towards a complete characterization of mz-transportability. One syntactic subtask of mz-transportability is to determine whether certain effects are identifiable in some source domains where interventional data is available. There are two fundamental results developed for identifiability that will be relevant for mz-transportability as well. First, we should consider confounded components (or c-components), which were defined in [20] and stand for clusters of variables connected through bidirected edges (and hence not separable through the observables in the system). One key result is that each causal graph (and each of its subgraphs) induces a unique C-component decomposition ([20, Lemma 11]).
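The claims about the binary models M1 and M2 defined above can be verified by brute-force enumeration of the 2^6 settings of U. The only liberty taken beyond the text is writing M1's final "+" as exclusive-or, which is equivalent here because the two combined terms (one gated by U5, the other by ¬U5) are never simultaneously 1:

```python
from itertools import product

# Enumerate the counter-example pair M1, M2 in the three domains
# pi_a = (Sa=1, Sb=0), pi_b = (Sa=0, Sb=1), pi* = (Sa=0, Sb=0).
def M1(u, sa, sb, do_x=None, do_z=None):
    u1, u2, u3, u4, u5, u6 = u
    x = u1 if do_x is None else do_x
    z = (u2 ^ (u3 & sa)) if do_z is None else do_z
    y = ((x ^ z ^ u1 ^ u2 ^ (u4 & sb)) & u5) ^ ((1 - u5) & u6)
    return x, z, y

def M2(u, sa, sb, do_x=None, do_z=None):
    u1, u2, u3, u4, u5, u6 = u
    x = u1 if do_x is None else do_x
    z = (u2 ^ (u3 & sa)) if do_z is None else do_z
    y = ((z ^ u2 ^ (u4 & sb)) & u5) ^ ((1 - u5) & u6)
    return x, z, y

def dist(model, sa, sb, **do):
    """P(X, Z, Y) under the model; each of the 64 settings of U has P = 1/64."""
    d = {}
    for u in product((0, 1), repeat=6):
        key = model(u, sa, sb, **do)
        d[key] = d.get(key, 0) + 1 / 64
    return d

# Observational distributions agree in all three domains (part of Eq. 3) ...
assert dist(M1, 1, 0) == dist(M2, 1, 0)
assert dist(M1, 0, 1) == dist(M2, 0, 1)
assert dist(M1, 0, 0) == dist(M2, 0, 0)
# ... as do the available interventional distributions in the sources ...
assert dist(M1, 1, 0, do_x=1) == dist(M2, 1, 0, do_x=1)   # P^(a)(Z,Y|do(X))
assert dist(M1, 0, 1, do_z=1) == dist(M2, 0, 1, do_z=1)   # P^(b)(X,Y|do(Z))
# ... but the target interventional distributions differ (Eq. 4):
assert dist(M1, 0, 0, do_x=1) != dist(M2, 0, 0, do_x=1)
```

All probabilities are multiples of 1/64, which are exact in floating point, so the dictionary comparisons are exact.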
This decomposition was indeed instrumental for a series of conditions for ordinary identification [21], and the inability to recursively decompose a certain graph was later used to prove completeness.

Definition 3 (C-component). Let G be a causal diagram such that a subset of its bidirected arcs forms a spanning tree over all vertices in G. Then G is a C-component (confounded component).

Subsequently, [22] proposed an extension of C-components called C-forests, essentially enforcing that each C-component has to be a spanning forest and closed under ancestral relations [20].

Footnote 7: For a more sophisticated argument on how to evaluate these models, see the proofs in Appendix 3.

Definition 4 (C-forest). Let G be a causal diagram where Y is the maximal root set. Then G is a Y-rooted C-forest if G is a C-component and all observable nodes have at most one child.

For concreteness, consider Fig. 1(c) and note that there exists a C-forest over nodes {Z1, X, Z2} rooted in {Z2}. There exists another C-forest over nodes {Z1, X, Z2, Y} rooted in {Y}. It is also the case that {Z2} and {Y} are themselves trivial C-forests. When we have a pair of C-forests such as {Z1, X, Z2} and {Z2}, or {Z1, X, Z2, Y} and {Y} – i.e., when the root set does not intersect the treatment variables – these structures are called hedges, and identifiability was shown to be infeasible whenever a hedge exists [22]. Clearly, despite the existence of hedges in Fig. 1(c,d), the effects of interest were shown to be mz-transportable. This example is an indication that hedges do not capture in an immediate way the structure needed for characterizing mz-transportability – i.e., a graph might be a hedge (or have a hedge as an edge subgraph) but the target quantity might still be mz-transportable. Based on these observations, we propose the following definition, which may lead to the boundaries of the class of mz-transportable relations:

Definition 5 (mz∗-shedge). Let D = (D^(1), ...
, D^(n)) be a collection of selection diagrams relative to source domains Π = (π1, ..., πn) and target domain π∗, respectively, let S_i represent the collection of S-variables in the selection diagram D^(i), and let D^(∗) be the causal diagram of π∗. Let {⟨P^i, I^i_z⟩} be the collection of pairs of observational and interventional distributions of {πi}, where I^i_z = ∪_{Z′⊆Z_i} P^i(v|do(z′)), and, in an analogous manner, let ⟨P∗, I∗_z⟩ be the observational and interventional distributions of π∗, for Z_i the set of experimental variables in πi. Consider a pair of R-rooted C-forests F = ⟨F, F′⟩ such that F′ ⊂ F, F′ ∩ X = ∅, F ∩ X ≠ ∅, and R ⊆ An(Y)_{G_X} (called a hedge [22]). We say that the induced collection of pairs of R-rooted C-forests over each diagram, ⟨F^(∗), F^(1), ..., F^(n)⟩, is an mz-shedge for P∗_x(y) relative to experiments (I∗_z, I^1_z, ..., I^n_z) if they are all hedges and one of the following conditions holds for each domain πi, i ∈ {∗, 1, ..., n}:

1. There exists at least one variable of S_i pointing to the induced diagram F′^(i), or
2. (F^(i) \ F′^(i)) ∩ Z_i is an empty set, or
3. The collection of pairs of C-forests induced over the diagrams, ⟨F^(∗), F^(1), ..., F^(i) \ Z∗_i, ..., F^(n)⟩, is also an mz-shedge relative to (I∗_z, I^1_z, ..., I^i_{z\z∗_i}, ..., I^n_z), where Z∗_i = (F^(i) \ F′^(i)) ∩ Z_i.

Furthermore, we call an mz∗-shedge an mz-shedge in which there exists a directed path from R \ (R ∩ De(X)_F) to (R ∩ De(X)_F) not passing through X (see also Appendix 3).

The definition of mz∗-shedge might appear involved, but it is nothing more than the articulation of the computability requirement of Def. 2 (and implicitly the syntactic goal of Thm. 1) in a more explicit graphical fashion.
Specifically, for a certain factor Q∗ i needed for the computation of the effect Q∗= P ∗(y|do(x)), in at least one domain, (i) it should be enforced that the S-nodes are separable from the inducing root set of the component in which Q∗ i belongs, and further, (ii) the experiments available in this domain are sufficient for solving Q∗ i . For instance, assuming we want to compute Q∗= P ∗(y|do(x)) in Fig. 1(c, d), Q∗can be decomposed into two factors, Q∗ 1 = P ∗ z1,x(z2) and Q∗ 2 = P ∗ z1,x,z2(y). It is the case that for factor Q∗ 1, (i) holds true in πb and (ii) the experiments available over Z1 are enough to guarantee the computability of this factor (similar analysis applies to Q∗ 2) – i.e., there is no mz∗-shedge and Q∗is computable from the available data. Def. 5 also asks for the explicit existence of a path from the nodes in the root set R\(R∩De(X)F ) to (R ∩De(X)F ), a simple example can help to illustrate this requirement. Consider Fig. 2(c) and the goal of computing Q = P ∗(y|do(x)) without extra experimental information. There exists a hedge for Q induced over {X, Z, Y } without the node W (note that {W} is a c-component itself) and the induced graph G{X,Z,Y } indeed leads to a counter-example for the computability of P ∗(z, y|do(x)). Using this subgraph alone, however, it would not be possible to construct a counterexample for the marginal effect P ∗(y|do(x)). Despite the fact that P ∗(z, y|do(x)) is not computable from P ∗(x, z, y), the quantity P ∗(y|do(x)) is identifiable in G{X,Z,Y }, and so any structural model compatible with this subgraph will generate the same value under the marginalization over Z from P ∗(z, y|do(x)). Also, it might happen that the root set R must be augmented (Fig. 2(d)), so we prefer to add this requirement explicitly to the definition. 
(There are more involved scenarios that we prefer to omit for the sake of presentation.)

PROCEDURE TRmz(y, x, P, I, S, W, D)
INPUT: x, y: value assignments; P: local distribution relative to domain S (S = 0 indexes π∗) and active experiments I; W: weighting scheme; D: backbone of selection diagram; Si: selection nodes in πi (S0 = ∅ relative to π∗); [the following set and distributions are globally defined: Zi, P∗, P(i)_Zi].
OUTPUT: P∗_x(y) in terms of P∗, P∗_Z, P(i)_Zi, or FAIL(D, C0).
1  if x = ∅, return Σ_{V\Y} P.
2  if V \ An(Y)_D ≠ ∅, return TRmz(y, x ∩ An(Y)_D, Σ_{V\An(Y)_D} P, I, S, W, D_{An(Y)}).
3  set W = (V \ X) \ An(Y)_{D_X}. if W ≠ ∅, return TRmz(y, x ∪ w, P, I, S, W, D).
4  if C(D \ X) = {C0, C1, ..., Ck}, return Σ_{V\{Y,X}} Π_i TRmz(ci, v \ ci, P, I, S, W, D).
5  if C(D \ X) = {C0},
6    if C(D) ≠ {D},
7      if C0 ∈ C(D), return Π_{i|Vi∈C0} Σ_{V\V(i)_D} P / Σ_{V\V(i−1)_D} P.
8      if (∃C′) C0 ⊂ C′ ∈ C(D), for {i | Vi ∈ C′}, set κi = κi ∪ v(i−1)_D \ C′.
         return TRmz(y, x ∩ C′, Π_{i|Vi∈C′} P(Vi | V(i−1)_D ∩ C′, κi), I, S, W, C′).
9    else,
10     if I = ∅, for i = 0, ..., |D|, if ((Si ⊥⊥ Y | X)_{D(i)_X} ∧ (Zi ∩ X ≠ ∅)),
         Ei = TRmz(y, x \ zi, P, Zi ∩ X, i, W, D \ {Zi ∩ X}).
11     if |E| > 0, return Σ_{i=1}^{|E|} w(j)_i Ei.
12     else, FAIL(D, C0).

Figure 3: Modified version of the identification algorithm, capable of recognizing mz-transportability.

After adding the directed path from Z to Y that passes through W, we can construct the following counter-example for Q:

M1: X = U1, Z = U1 ⊕ U2, W = ((Z ⊕ U3) ∨ B) ⊕ (B ∧ (1 ⊕ Z)), Y = ((X ⊕ W ⊕ U2) ∧ A) ⊕ (A ∨ (1 ⊕ X ⊕ W ⊕ U2));
M2: X = U1, Z = U2, W = ((Z ⊕ U3) ∨ B) ⊕ (B ∧ (1 ⊕ Z)), Y = ((W ⊕ U2) ∧ A) ⊕ (A ∨ (1 ⊕ W ⊕ U2));

with P(Ui) = 1/2 for all i, and P(A) = P(B) = 1/2. It is not immediate to show that the two models produce the desired property; refer to Appendix 2 for a formal proof of this statement. Given that the definition of mz∗-shedge is justified and well-understood, we can now state the connection between hedges and mz∗-shedges more directly (the proof can be found in Appendix 3):

Theorem 2.
If there is a hedge for P ∗ x(y) in G and no experimental data is available (i.e., I∗ z = {}), there exists an mz∗-shedge for P ∗ x(y) in G. Whenever one domain is considered and no experimental data is available, this result states that a mz∗-shedge can always be constructed from a hedge, which implies that we can operate with mz∗shedges from now on (the converse holds for Z = {}). Finally, we can concentrate on the most general case of mz∗-shedges with experimental data in multiple domains as stated in the sequel: Theorem 3. Let D = {D(1), ..., D(n)} be a collection of selection diagrams relative to source domains Π = {π1, ..., πn}, and target domain π∗, respectively, and {Ii z}, for i = {∗, 1, ..., n} defined appropriately. If there is an mz∗-shedge for the effect R = P ∗ x(y) relative to experiments (I∗ z , I1 z, ..., In z ) in D, R is not mz-transportable from Π to π∗in D. This is a powerful result that states that the existence of a mz∗-shedge precludes mz-transportability. (The proof of this statement is somewhat involved, see the supplementary material for more details.) For concreteness, let us consider the selection diagrams D = (D(a), D(b)) relative to domains πa and πb in Fig. 2(a,b). Our goal is to mz-transport Q = P ∗(y|do(x)) with experiments over {X} in πa and {Z} in πb. It is the case that there exists an mz∗-shedge relative to the given experiments. To witness, first note that F ′ = {Y, Z} and F = F ′ ∪{X}, and also that there exists a selection variable S pointing to F ′ in both domains – the first condition of Def. 5 is satisfied. This is a trivial graph with 3 variables that can be solved by inspection, but it is somewhat involved to efficiently evaluate the conditions of the definition in more intricate structures, which motivates the search for a procedure for recognizing mz∗-shedges that can be coupled with the previous theorem. 
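Since the counter-example models M1 and M2 above involve only five binary exogenous variables, their key property can be confirmed by brute-force enumeration: both models induce the same observational distribution over (X, Z, W, Y), yet disagree on P(y | do(x)). This is only an illustrative check of the construction, not a substitute for the formal proof in Appendix 2.

```python
from itertools import product
from fractions import Fraction
from collections import Counter

def w_eq(z, u3, b):
    # W = ((Z xor U3) or B) xor (B and (1 xor Z)), shared by both models
    return ((z ^ u3) | b) ^ (b & (1 ^ z))

def y_eq(t, a):
    # Y = ((t) and A) xor (A or (1 xor t)), the template of both Y equations
    return (t & a) ^ (a | (1 ^ t))

def run(model, do_x=None):
    """Enumerate the 2^5 equally likely settings of (U1, U2, U3, A, B)
    and return the exact joint distribution over (X, Z, W, Y)."""
    joint = Counter()
    for u1, u2, u3, a, b in product((0, 1), repeat=5):
        x = u1 if do_x is None else do_x      # do(X) overrides X's equation
        z = (u1 ^ u2) if model == 1 else u2
        w = w_eq(z, u3, b)
        t = (x ^ w ^ u2) if model == 1 else (w ^ u2)
        y = y_eq(t, a)
        joint[(x, z, w, y)] += Fraction(1, 32)
    return joint

obs1, obs2 = run(1), run(2)                   # observational joints
p1 = sum(p for (x, z, w, y), p in run(1, do_x=0).items() if y == 1)
p2 = sum(p for (x, z, w, y), p in run(2, do_x=0).items() if y == 1)
```

Under do(X = 0), the enumeration gives P(Y = 1) = 1/2 in M1 but 3/4 in M2, while the two observational joints coincide exactly, which is precisely the property a counter-example needs.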
7 4 Complete Algorithm for mz-transportability There exists an extensive literature concerned with the problem of computability of causal relations from a combination of assumptions and data [21, 22, 7, 13]. In this section, we build on the works that treat this problem by graphical means, and we concentrate particularly in the algorithm called TRmz constructed in [13] (see Fig. 3) that followed some of the results in [21, 22, 7]. The algorithm TRmz takes as input a collection of selection diagrams with the corresponding experimental data from the corresponding domains, and it returns a transport formula whenever it is able to produce one. The main idea of the algorithm is to leverage the c-component factorization [20] and recursively decompose the target relation into manageable pieces (line 4), so as to try to solve each of them separately. Whenever this standard evaluation fails in the target domain π∗(line 6), TRmz tries to use the experimental information available from the target and source domains (line 10). (For a concrete view of how TRmz works, see the running example in [13, pp. 7]. ) In a systematic fashion, the algorithm basically implements the declarative condition delineated in Theorem 1. TRmz was shown to be sound [13, Thm. 3], but there is no theoretical guarantee on whether failure in finding a transport formula implies its non-existence and perhaps, the complete lack of transportability. This guarantee is precisely what we state in the sequel. Theorem 4. Assume TRmz fails to transport the effect P ∗ x(y) (exits with failure executing line 12). Then there exists X′ ⊆X, Y′ ⊆Y, such that the graph pair D, C0 returned by the fail condition of TRmz contains as edge subgraphs C-forests F, F’ that span a mz∗-shedge for P ∗ x′(y′). Proof. Let D be the subgraph local to the call in which TRmz failed, and R be the root set of D. It is possible to remove some directed arrows from D while preserving R as root, which result in a Rrooted c-forest F. 
Since by construction F ′ = F ∩C0 is closed under descendents and only directed arrows were removed, both F, F ′ are C-forests. Also by construction R ⊂An(Y)GX together with the fact that X and Y from the recursive call are clearly subsets of the original input. Before failure, TRmz evaluated false consecutively at lines 6, 10, and 11, and it is not difficult to see that an S-node points to F ′ or the respective experiments were not able to break the local hedge (lines 10 and 11). It remains to be showed that this mz-shedge can be stretched to generate a mz∗-shedge, but now the same construction given in Thm. 2 can be applied (see also supplementary material). Finally, we are ready to state the completeness of the algorithm and the graphical condition. Theorem 5 (completeness). TRmz is complete. Corollary 1 (mz∗-shedge characterization). P ∗ x(y) is mz-transportable from Π to π∗in D if and only if there is not mz∗-shedge for Px′(y′) in D for any X′ ⊆X and Y′ ⊆Y. Furthermore, we show below that the do-calculus is complete for establishing mz-transportability, which means that failure in the exhaustive application of its rules implies the non-existence of a mapping from the available data to the target relation (i.e., there is no mz-transport formula), independently of the method used to obtain such mapping. Corollary 2 (do-calculus characterization). The rules of do-calculus together with standard probability manipulations are complete for establishing mz-transportability of causal effects. 5 Conclusions In this paper, we provided a complete characterization in the form of a graphical condition for deciding mz-transportability. We further showed that the procedure introduced in [1] for computing the transport formula is complete, which means that the set of transportable instances identified by the algorithm cannot be broadened without strengthening the assumptions. 
Finally, we showed that the do-calculus is complete for this class of problems, which means that finding a proof strategy in this language suffices to solve the problem. The non-parametric characterization established in this paper gives rise to a new set of research questions. While our analysis aimed at achieving unbiased transport under asymptotic conditions, additional considerations need to be taken into account when dealing with finite samples. Specifically, when sample sizes vary significantly across studies, statistical power considerations need to be invoked along with bias considerations. Furthermore, when no transport formula exists, approximation techniques must be resorted to, for example, replacing the requirement of non-parametric analysis with assumptions about linearity or monotonicity of certain relationships in the domains. The nonparametric characterization provided in this paper should serve as a guideline for such approximation schemes. 8 References [1] D. Campbell and J. Stanley. Experimental and Quasi-Experimental Designs for Research. Wadsworth Publishing, Chicago, 1963. [2] C. Manski. Identification for Prediction and Decision. Harvard University Press, Cambridge, Massachusetts, 2007. [3] L. V. Hedges and I. Olkin. Statistical Methods for Meta-Analysis. Academic Press, January 1985. [4] W.R. Shadish, T.D. Cook, and D.T. Campbell. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton-Mifflin, Boston, second edition, 2002. [5] S. Morgan and C. Winship. Counterfactuals and Causal Inference: Methods and Principles for Social Research (Analytical Methods for Social Research). Cambridge University Press, New York, NY, 2007. [6] J. Pearl and E. Bareinboim. Transportability of causal and statistical relations: A formal approach. In W. Burgard and D. Roth, editors, Proceedings of the Twenty-Fifth National Conference on Artificial Intelligence, pages 247–254. AAAI Press, Menlo Park, CA, 2011. [7] E. Bareinboim and J. 
Pearl. Transportability of causal effects: Completeness results. In J. Hoffmann and B. Selman, editors, Proceedings of the Twenty-Sixth National Conference on Artificial Intelligence, pages 698–704. AAAI Press, Menlo Park, CA, 2012. [8] E. Bareinboim and J. Pearl. A general algorithm for deciding transportability of experimental results. Journal of Causal Inference, 1(1):107–134, 2013. [9] E. Bareinboim and J. Pearl. Causal transportability with limited experiments. In M. desJardins and M. Littman, editors, Proceedings of the Twenty-Seventh National Conference on Artificial Intelligence, pages 95–101, Menlo Park, CA, 2013. AAAI Press. [10] S. Lee and V. Honavar. Causal transportability of experiments on controllable subsets of variables: ztransportability. In A. Nicholson and P. Smyth, editors, Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI), pages 361–370. AUAI Press, 2013. [11] E. Bareinboim and J. Pearl. Meta-transportability of causal effects: A formal approach. In C. Carvalho and P. Ravikumar, editors, Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pages 135–143. JMLR W&CP 31, 2013. [12] S. Lee and V. Honavar. m-transportability: Transportability of a causal effect from multiple environments. In M. desJardins and M. Littman, editors, Proceedings of the Twenty-Seventh National Conference on Artificial Intelligence, pages 583–590, Menlo Park, CA, 2013. AAAI Press. [13] E. Bareinboim, S. Lee, V. Honavar, and J. Pearl. Transportability from multiple environments with limited experiments. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 136–144. Curran Associates, Inc., 2013. [14] H. Daume III and D. Marcu. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26:101–126, 2006. [15] A.J. Storkey. 
When training and test sets are different: characterising learning transfer. In J. Candela, M. Sugiyama, A. Schwaighofer, and N.D. Lawrence, editors, Dataset Shift in Machine Learning, pages 3–28. MIT Press, Cambridge, MA, 2009. [16] B. Sch¨olkopf, D. Janzing, J. Peters, E. Sgouritsa, K. Zhang, and J. Mooij. On causal and anticausal learning. In J Langford and J Pineau, editors, Proceedings of the 29th International Conference on Machine Learning (ICML), pages 1255–1262, New York, NY, USA, 2012. Omnipress. [17] K. Zhang, B. Sch¨olkopf, K. Muandet, and Z. Wang. Domain adaptation under target and conditional shift. In Proceedings of the 30th International Conference on Machine Learning (ICML). JMLR: W&CP volume 28, 2013. [18] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, New York, 2000. 2nd edition, 2009. [19] P. Spirtes, C.N. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, Cambridge, MA, 2nd edition, 2000. [20] J. Tian. Studies in Causal Reasoning and Learning. PhD thesis, Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, November 2002. [21] J. Tian and J. Pearl. A general identification condition for causal effects. In Proceedings of the Eighteenth National Conference on Artificial Intelligence, pages 567–573. AAAI Press/The MIT Press, Menlo Park, CA, 2002. [22] I. Shpitser and J. Pearl. Identification of joint interventional distributions in recursive semi-Markovian causal models. In Proceedings of the Twenty-First National Conference on Artificial Intelligence, pages 1219–1226. AAAI Press, Menlo Park, CA, 2006. 9
Fast and Robust Least Squares Estimation in Corrupted Linear Models

Brian McWilliams* Gabriel Krummenacher* Mario Lucic Joachim M. Buhmann
Department of Computer Science, ETH Zürich, Switzerland
{mcbrian,gabriel.krummenacher,lucic,jbuhmann}@inf.ethz.ch

Abstract

Subsampling methods have recently been proposed to speed up least squares estimation in large-scale settings. However, these algorithms are typically not robust to outliers or corruptions in the observed covariates. The concept of influence that was developed for regression diagnostics can be used to detect such corrupted observations, as shown in this paper. This property of influence – for which we also develop a randomized approximation – motivates our proposed subsampling algorithm for large-scale corrupted linear regression, which limits the influence of data points, since highly influential points contribute most to the residual error. Under a general model of corrupted observations, we show theoretically and empirically on a variety of simulated and real datasets that our algorithm improves over the current state-of-the-art approximation schemes for ordinary least squares.

1 Introduction

To improve scalability of the widely used ordinary least squares algorithm, a number of randomized approximation algorithms have recently been proposed. These methods, based on subsampling the dataset, reduce the computational time from O(np^2) to o(np^2)¹ [14]. Most of these algorithms are concerned with the classical fixed design setting, or the case where the data is assumed to be sampled i.i.d., typically from a sub-Gaussian distribution [7]. This is known to be an unrealistic modelling assumption, since real-world data are rarely well-behaved in the sense of the underlying distributions. We relax this limiting assumption by considering the setting where, with some probability, the observed covariates are corrupted with additive noise.
This scenario corresponds to a generalised version of the classical problem of “errors-in-variables” in regression analysis, which has recently been considered in the context of sparse estimation [12]. This corrupted observation model poses a more realistic model of real data, which may be subject to many different sources of measurement noise or heterogeneity in the dataset. A key consideration for sampling is to ensure that the points used for estimation are typical of the full dataset. Typicality requires the sampling distribution to be robust against outliers and corrupted points. In the i.i.d. sub-Gaussian setting, outliers are rare and can often easily be identified by examining the statistical leverage scores of the datapoints. Crucially, in the corrupted observation setting described in §2, the concept of an outlying point concerns the relationship between the observed predictors and the response. Now, leverage alone cannot detect the presence of corruptions. Consequently, without using additional knowledge about the corrupted points, the OLS estimator (and its subsampled approximations) is biased. This also rules out stochastic gradient descent (SGD) – which is often used for large scale regression – since the convex cost functions and regularizers typically used for noisy data are not robust with respect to measurement corruptions. This setting motivates our use of influence – the effective impact an individual datapoint exerts on the overall estimate – in order to detect and therefore avoid sampling corrupted points. We propose an algorithm which is robust to corrupted observations and exhibits reduced bias compared with other subsampling estimators.

* Authors contributed equally.
¹ Informally: f(n) = o(g(n)) means f(n) grows more slowly than g(n).

Outline and Contributions. In §2 we introduce our corrupted observation model, before reviewing the basic concepts of statistical leverage and influence in §3.
In §4 we briefly review two subsampling approaches to approximating least squares based on structured random projections and leverage weighted importance sampling. Based on these ideas we present influence weighted subsampling (IWS-LS), a novel randomized least squares algorithm based on subsampling points with small influence in §5. In §6 we analyse IWS-LS in the general setting where the observed predictors can be corrupted with additive sub-Gaussian noise. Comparing the IWS-LS estimate with that of OLS and other randomized least squares approaches we show a reduction in both bias and variance. It is important to note that the simultaneous reduction in bias and variance is relative to OLS and randomized approximations which are only unbiased in the non-corrupted setting. Our results rely on novel finite sample characteristics of leverage and influence which we defer to §SI.3. Additionally, in §SI.4 we prove an estimation error bound for IWS-LS in the standard sub-Gaussian model. Computing influence exactly is not practical in large-scale applications and so we propose two randomized approximation algorithms based on the randomized leverage approximation of [8]. Both of these algorithms run in o(np2) time which improve scalability in large problems. Finally, in §7 we present extensive experimental evaluation which compares the performance of our algorithms against several randomized least squares methods on a variety of simulated and real datasets. 2 Statistical model In this work we consider a variant of the standard linear model y = Xβ + ✏, (1) where ✏2 Rn is a noise term independent of X 2 Rn⇥p. However, rather than directly observing X we instead observe Z where Z = X + UW. (2) U = diag(u1, . . . , un) and ui is a Bernoulli random variable with probability ⇡of being 1. W 2 Rn⇥p is a matrix of measurement corruptions. The rows of Z therefore are corrupted with probability ⇡and not corrupted with probability (1 −⇡). Definition 1 (Sub-gaussian matrix). 
A zero-mean matrix X is called sub-Gaussian with parameter (σ_x^2/n, Σ_x/n) if (a) each row x_i^T ∈ R^p is sampled independently and has E[x_i x_i^T] = Σ_x/n, and (b) for any unit vector v ∈ R^p, v^T x_i is a sub-Gaussian random variable with parameter at most σ_x/√p.

We consider the specific instance of the linear corrupted observation model in Eqs. (1), (2) where
• X, W ∈ R^{n×p} are sub-Gaussian with parameters (σ_x^2/n, Σ_x/n) and (σ_w^2/n, Σ_w/n) respectively,
• ε ∈ R^n is sub-Gaussian with parameters (σ_ε^2/n, (σ_ε^2/n) I_n),
and all are independent of each other.

The key challenge is that even when π and the magnitude of the corruptions, σ_w, are relatively small, the standard linear regression estimate is biased and can perform poorly (see §6). Sampling methods which are not sensitive to corruptions in the observations can perform even worse if they somehow subsample a proportion r_n > πn of corrupted points. Furthermore, the corruptions may not be large enough to be detected via leverage-based techniques alone.

The model described in this section generalises the “errors-in-variables” model from classical least squares modelling. Recently, similar models have been studied in the high-dimensional (p ≫ n) setting in [4–6, 12] in the context of robust sparse estimation. The “low-dimensional” (n > p) setting is investigated in [4], but the “big data” setting (n ≫ p) has not been considered so far.² In the high-dimensional problem, knowledge of the corruption covariance, Σ_w [12], or the data covariance, Σ_x [5], is required to obtain a consistent estimate. This assumption may be unrealistic in many settings. We aim to reduce the bias in our estimates without requiring knowledge of the true covariance of the data or the corruptions, and instead sub-sample only non-corrupted points.

3 Diagnostics for linear regression

In practice, the sub-Gaussian linear model assumption is often violated either by heterogeneous noise or by a corruption model as in §2.
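The bias induced by the corruption model can be illustrated with a small simulation; all dimensions, the corruption probability π and the noise scales below are arbitrary choices for illustration, not values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, pi_corrupt, sigma_w = 5000, 5, 0.2, 3.0

beta = rng.normal(size=p)
X = rng.normal(size=(n, p))                 # clean sub-Gaussian design
eps = 0.1 * rng.normal(size=n)
y = X @ beta + eps                          # Eq. (1): response from clean X

U = rng.random(n) < pi_corrupt              # Bernoulli corruption indicators
W = sigma_w * rng.normal(size=(n, p))       # additive measurement corruptions
Z = X + U[:, None] * W                      # Eq. (2): observed covariates

beta_clean = np.linalg.lstsq(X, y, rcond=None)[0]
beta_corrupt = np.linalg.lstsq(Z, y, rcond=None)[0]

err_clean = np.linalg.norm(beta_clean - beta)
err_corrupt = np.linalg.norm(beta_corrupt - beta)
```

Even though only a fifth of the rows are corrupted here, the OLS fit on Z is attenuated toward zero and its error does not vanish with n, while the fit on the uncorrupted X recovers β up to the usual sampling noise.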
In such scenarios, fitting a least squares model to the full dataset is unwise since the outlying or corrupted points can have a large adverse effect on the model fit. Regression diagnostics have been developed in the statistics literature to detect such points (see e.g. [2] for a comprehensive overview). Recently, [14] proposed subsampling points for least squares based on their leverage scores. Other recent works suggest related influence measures that identify subspace [16] and multi-view [15] clusters in high dimensional data. 3.1 Statistical leverage For the standard linear model in Eq. (1), the well known least squares solution is bβ = arg min β ky −Xβk2 = ! X>X "−1 X>y. (3) The projection matrix I−L with L := X(X>X)−1X> specifies the subspace in which the residual lies. The diagonal elements of the “hat matrix” L, li := Lii, i = 1, . . . , n are the statistical leverage scores of the ith sample. Leverage scores quantify to what extent a particular sample is an outlier with respect to the distribution of X. An equivalent definition from [14] which will be useful later concerns any matrix U 2 Rn⇥p which spans the column space of X (for example, the matrix whose columns are the left singular vectors of X). The statistical leverage scores of the rows of X are the squared row norms of U, i.e. li = kUik2. Although the use of leverage can be motivated from the least squares solution in Eq. (3), the leverage scores do not take into account the relationship between the predictor variables and the response variable y. Therefore, low-leverage points may have a weak predictive relationship with the response and vice-versa. In other words, it is possible for such points to be outliers with respect to the conditional distribution P(y|X) but not the marginal distribution on X. 3.2 Influence A concept that captures the predictive relationship between covariates and response is influence. 
Influential points are those that might not be outliers in the geometric sense, but instead adversely affect the estimated coefficients. One way to assess the influence of a point is to compute the change in the learned model when the point is removed from the estimation step [2]. We can compute a leave-one-out least squares estimator by straightforward application of the Sherman–Morrison–Woodbury formula (see Prop. 3 in §SI.3):

β̂_{−i} = (X^T X − x_i x_i^T)^{−1} (X^T y − x_i y_i) = β̂ − (X^T X)^{−1} x_i e_i / (1 − l_i),

where e_i = y_i − x_i^T β̂_OLS. Defining the influence³, d_i, as the change in expected mean squared error, we have

d_i = (β̂ − β̂_{−i})^T X^T X (β̂ − β̂_{−i}) = e_i^2 l_i / (1 − l_i)^2.

² Unlike [5, 12] and others, we do not consider sparsity in our solution since n ≫ p.
³ The expression we use is also called Cook’s distance [2].

Points with large values of d_i are those which, if added to the model, have the largest adverse effect on the resulting estimate. Since influence only depends on the OLS residual error and the leverage scores, it can be seen that the influence of every point can be computed at the cost of a least squares fit. In the next section we will see how to approximate both quantities using random projections.

4 Fast randomized least squares algorithms

We briefly review two randomized approaches to least squares approximation: the importance weighted subsampling approach of [9] and the dimensionality reduction approach [14]. The former proposes an importance sampling probability distribution according to which a small number of rows of X and y are drawn and used to compute the regression coefficients. If the sampling probabilities are proportional to the statistical leverages, the resulting estimator is close to the optimal estimator [9]. We refer to this as LEV-LS. The dimensionality reduction approach can be viewed as a random projection step followed by a uniform subsampling. The class of Johnson–Lindenstrauss projections – e.g.
the SRHT – has been shown to approximately uniformize leverage scores in the projected space. Uniformly subsampling the rows of the projected matrix proves to be equivalent to leverage weighted sampling on the original dataset [14]. We refer to this as SRHT-LS. It is analysed in the statistical setting by [7] who also propose ULURU, a two step fitting procedure which aims to correct for the subsampling bias and consequently converges to the OLS estimate at a rate independent of the number of subsamples [7]. Subsampled Randomized Hadamard Transform (SRHT) The SHRT consists of a preconditioning step after which nsubs rows of the new matrix are subsampled uniformly at random in the following way q n nsubs SHD · X = ⇧X with the definitions [3]: • S is a subsampling matrix. • D is a diagonal matrix whose entries are drawn independently from {−1, 1}. • H 2 Rn⇥n is a normalized Walsh-Hadamard matrix4 which is defined recursively as Hn = Hn/2 Hn/2 Hn/2 −Hn/2 ( , H2 = +1 +1 +1 −1 ( . We set H = 1 pnHn so it has orthonormal columns. As a result, the rows of the transformed matrix ⇧X have approximately uniform leverage scores. (see [17] for detailed analysis of the SRHT). Due to the recursive nature of H, the cost of applying the SRHT is O (pn log nsubs) operations, where nsubs is the number of rows sampled from X [1]. The SRHT-LS algorithm solves bβSRHT = arg minβ k⇧y −⇧Xβk2 which for an appropriate subsampling ratio, r = ⌦( p2 ⇢2 ) results in a residual error, ˜e which satisfies k˜ek (1 + ⇢)kek (4) where e = y −XbβOLS is the vector of OLS residual errors [14]. Randomized leverage computation Recently, a method based on random projections has been proposed to approximate the leverage scores based on first reducing the dimensionality of the data using the SRHT followed by computing the leverage scores using this low-dimensional approximation [8–10,13]. 
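The SRHT preconditioning step can be sketched with an in-place fast Walsh–Hadamard transform; the function name and interface below are our own, and for simplicity rows are subsampled without replacement.

```python
import numpy as np

def srht(X, n_subs, rng):
    """Subsampled Randomized Hadamard Transform: sqrt(n/n_subs) * S H D X.
    n (the number of rows) must be a power of two for the Walsh-Hadamard
    recursion used here."""
    n = X.shape[0]
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    D = rng.choice([-1.0, 1.0], size=n)        # random signs (matrix D)
    HX = D[:, None] * X
    # in-place fast Walsh-Hadamard transform, O(n log n) per column
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = HX[i:i + h].copy()
            b = HX[i + h:i + 2 * h].copy()
            HX[i:i + h] = a + b
            HX[i + h:i + 2 * h] = a - b
        h *= 2
    HX /= np.sqrt(n)                           # normalized H is orthonormal
    rows = rng.choice(n, size=n_subs, replace=False)   # uniform subsampling S
    return np.sqrt(n / n_subs) * HX[rows]
```

With n_subs = n the map is orthogonal, so (ΠX)^T(ΠX) = X^T X exactly; with n_subs < n it is an unbiased sketch of that Gram matrix, since E[S^T S] = (n_subs/n) I.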
The leverage approximation algorithm of [8] uses a SRHT, ⇧1 2 Rr1⇥n to first compute the approximate SVD of X, ⇧1X = U⇧X⌃⇧XV> ⇧X. Followed by a second SHRT ⇧2 2 Rp⇥r2 to compute an approximate orthogonal basis for X R−1 = V⇧X⌃−1 ⇧X 2 Rp⇥p, ˜U = XR−1⇧2 2 Rn⇥r2. (5) 4For the Hadamard transform, n must be a power of two but other transforms exist (e.g. DCT, DFT) for which similar theoretical guarantees hold and there is no restriction on n. 4 The approximate leverage scores are now the squared row norms of ˜U, ˜li = k ˜Uik2. From [14] we derive the following result relating to randomized approximation of the leverage ˜li (1 + ⇢l)li , (6) where the approximation error, ⇢l depends on the choice of projection dimensions r1 and r2. The leverage weighted least squares (LEV-LS) algorithm samples rows of X and y with probability proportional to li (or ˜li in the approximate case) and performs least squares on this subsample. The residual error resulting from the leverage weighted least squares is bounded by Eq. (4) implying that LEV-LS and SRHT-LS are equivalent [14]. It is important to note that under the corrupted observation model these approximations will be biased. 5 Influence weighted subsampling In the corrupted observation model, OLS and therefore the random approximations to OLS described in §4 obtain poor predictions. To remedy this, we propose influence weighted subsampling (IWS-LS) which is described in Algorithm 1. IWS-LS subsamples points according to the distribution, Pi = c/di where c is a normalizing constant so that Pn i=1 Pi = 1. OLS is then estimated on the subsampled points. The sampling procedure ensures that points with high influence are selected infrequently and so the resulting estimate is less biased than the full OLS solution. Several approaches similar in spirit have previously been proposed based on identifying and down-weighting the effect of highly influential observations [19]. 
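Before approximating anything, the closed-form influence d_i = e_i^2 l_i/(1 − l_i)^2 from §3.2, which IWS-LS relies on, can be sanity-checked against its leave-one-out definition on a small random problem (sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 4
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + 0.5 * rng.normal(size=n)

G = X.T @ X
beta = np.linalg.solve(G, X.T @ y)          # OLS coefficients
L = X @ np.linalg.solve(G, X.T)             # hat matrix
lev = np.diag(L)                            # leverage scores l_i
e = y - X @ beta                            # OLS residuals
d = e**2 * lev / (1.0 - lev)**2             # influence (Cook-style distance)

# leave-one-out check for one point: d_i = (b - b_-i)^T X^T X (b - b_-i)
i = 7
mask = np.arange(n) != i
beta_i = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
diff = beta - beta_i
d_direct = diff @ G @ diff
```

The agreement is exact up to floating point, since the formula follows from the Sherman–Morrison identity rather than from an approximation.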
Obviously, IWS-LS is impractical in the scenarios we consider since it requires the OLS residuals and full leverage scores. However, we use this as a baseline and to simplify the analysis. In the next section, we propose an approximate influence weighted subsampling algorithm which combines the approximate leverage computation of [8] and the randomized least squares approach of [14].

Algorithm 1 Influence weighted subsampling (IWS-LS).
Input: Data: Z, y
1: Solve β̂_OLS = arg min_β ||y − Zβ||^2
2: for i = 1 . . . n do
3:   e_i = y_i − z_i β̂_OLS
4:   l_i = z_i^T (Z^T Z)^{−1} z_i
5:   d_i = e_i^2 l_i / (1 − l_i)^2
6: end for
7: Sample rows (Z̃, ỹ) of (Z, y) proportional to 1/d_i
8: Solve β̂_IWS = arg min_β ||ỹ − Z̃β||^2
Output: β̂_IWS

Algorithm 2 Residual weighted subsampling (aRWS-LS).
Input: Data: Z, y
1: Solve β̂_SRHT = arg min_β ||Π(y − Zβ)||^2
2: Estimate residuals: ẽ = y − Zβ̂_SRHT
3: Sample rows (Z̃, ỹ) of (Z, y) proportional to 1/ẽ_i^2
4: Solve β̂_RWS = arg min_β ||ỹ − Z̃β||^2
Output: β̂_RWS

Randomized approximation algorithms. Using the ideas from §4, we obtain the following randomized approximation to the influence scores:

d̃_i = ẽ_i^2 l̃_i / (1 − l̃_i)^2,   (7)

where ẽ_i is the ith residual error computed using the SRHT-LS estimator. Since the approximation errors of ẽ_i and l̃_i are bounded (inequalities (4) and (6)), this suggests that our randomized approximation to influence is close to the true influence.

Basic approximation. The first approximation algorithm is identical to Algorithm 1 except that leverage and residuals are replaced by their randomized approximations as in Eq. (7). We refer to this algorithm as approximate influence weighted subsampling (aIWS-LS). Full details are given in Algorithm 3 in §SI.2.

Residual weighted sampling. Leverage scores are typically uniform [7, 13] for sub-Gaussian data. Even in the corrupted setting, the difference in leverage scores between corrupted and noncorrupted points is small (see §6).
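A minimal sketch of Algorithm 2 follows, with two deliberate simplifications that are our own: the pilot fit uses plain OLS instead of the cheaper SRHT-LS estimate, and a small delta guards the division when a residual is numerically zero.

```python
import numpy as np

def rws_ls(Z, y, n_subs, rng, delta=1e-8):
    """Sketch of residual weighted subsampling (aRWS-LS, Algorithm 2).
    The pilot fit here is plain OLS for clarity; the paper instead uses
    the cheaper SRHT-LS estimate to obtain the pilot residuals."""
    beta_pilot = np.linalg.lstsq(Z, y, rcond=None)[0]
    e = y - Z @ beta_pilot                      # pilot residuals
    w = 1.0 / (e**2 + delta)                    # sample prop. to 1 / e_i^2
    rows = rng.choice(len(y), size=n_subs, replace=False, p=w / w.sum())
    beta = np.linalg.lstsq(Z[rows], y[rows], rcond=None)[0]
    return beta, rows

# demo on synthetic data from the corrupted model (all scales arbitrary)
rng = np.random.default_rng(3)
n, p, pi_c, s_w = 4000, 4, 0.05, 3.0
beta_true = rng.normal(size=p)
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(size=n)
corrupted = rng.random(n) < pi_c
Z = X + corrupted[:, None] * (s_w * rng.normal(size=(n, p)))
beta_rws, rows = rws_ls(Z, y, 1000, rng)
frac_corrupted_sampled = corrupted[rows].mean()
```

On this synthetic data, the inverse-residual weights make corrupted rows, which tend to have large pilot residuals, substantially underrepresented in the subsample relative to their population fraction π.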
Therefore, the main contribution to the influence of each point will originate from the residual error $e_i^2$. Consequently, we propose sampling with probability inversely proportional to the approximate squared residual, $1/\tilde{e}_i^2$. The resulting algorithm, residual weighted subsampling (aRWS-LS), is detailed in Algorithm 2. Although aRWS-LS is not guaranteed to be a good approximation to IWS-LS, empirical results suggest that it works well in practice and is faster to compute than aIWS-LS.

Computational complexity. Clearly, the computational complexity of IWS-LS is $O(np^2)$. The computational complexity of aIWS-LS is $O(np \log n_{subs} + npr_2 + n_{subs} p^2)$, where the first term is the cost of SRHT-LS, the second term is the cost of the approximate leverage computation, and the last term is the cost of solving OLS on the subsampled data set. Here, $r_2$ is the dimension of the random projection detailed in Eq. (5). The cost of aRWS-LS is $O(np \log n_{subs} + np + n_{subs} p^2)$, where the first term is the cost of SRHT-LS, the second term is the cost of computing the residuals $e$, and the last term is the cost of solving OLS on the subsampled data set. This computation can be reduced to $O(np \log n_{subs} + n_{subs} p^2)$. Therefore, the cost of both aIWS-LS and aRWS-LS is $o(np^2)$.

6 Estimation error

In this section we prove an upper bound on the estimation error of IWS-LS in the corrupted model. First, we show that the OLS error consists of two additional variance terms that depend on the size and proportion of the corruptions, and an additional bias term. We then show that IWS-LS can significantly reduce the relative variance and bias in this setting, so that the error no longer depends on the magnitude of the corruptions but only on their proportion. We compare these results with recent results from [4, 12] suggesting that consistent estimation requires knowledge of $\Sigma_w$. More recently, [5] show that incomplete knowledge of this quantity results in a biased estimator, where the bias is proportional to the uncertainty about $\Sigma_w$.
We see that the form of our bound matches these results. Inequalities are said to hold with high probability (w.h.p.) if the probability of failure is not more than $C_1 \exp(-C_2 \log p)$, where $C_1, C_2$ are positive constants that do not depend on the scaling quantities $n$, $p$, $\sigma_w$. The symbol $\lesssim$ means that we ignore constants that do not depend on these scaling quantities. Proofs are provided in the supplement. Unless otherwise stated, $\|\cdot\|$ denotes the $\ell_2$ norm for vectors and the spectral norm for matrices.

Corrupted observation model. As a baseline, we first investigate the behaviour of the OLS estimator in the corrupted model.

Theorem 1 (A bound on $\|\hat{\beta}_{OLS} - \beta\|$). If $n \gtrsim \frac{\sigma_x^2 \sigma_w^2}{\lambda_{\min}(\Sigma_x)}\, p \log p$, then w.h.p.
$$\|\hat{\beta}_{OLS} - \beta\| \lesssim \left( \left( \sigma_\epsilon \sigma_x + \pi \sigma_\epsilon \sigma_w + \pi \left( \sigma_w^2 + \sigma_w \sigma_x \right) \|\beta\| \right) \sqrt{\frac{p \log p}{n}} + \pi \sigma_w^2 \sqrt{p}\, \|\beta\| \right) \cdot \frac{1}{\lambda} \quad (8)$$
where $0 < \lambda \leq \lambda_{\min}(\Sigma_x) + \pi \lambda_{\min}(\Sigma_w)$.

Remark 1 (No corruptions case). Notice that for a fixed $\sigma_w$, taking $\lim_{\pi \to 0}$, or for a fixed $\pi$, taking $\lim_{\sigma_w \to 0}$ (i.e. there are no corruptions), the above error reduces to the least squares result (see for example [4]).

Remark 2 (Variance and bias). The first three terms in (8) scale with $\sqrt{1/n}$, so as $n \to \infty$ these terms tend towards 0. The last term does not depend on $\sqrt{1/n}$, and so for some non-zero $\pi$ the least squares estimate will incur a bias depending on the fraction and magnitude of the corruptions.

We are now ready to state our theorem characterising the mean squared error of the influence weighted subsampling estimator.

Theorem 2 (Influence sampling in the corrupted model). For $n \gtrsim \frac{\sigma_x^2 \sigma_w^2}{\lambda_{\min}(\Sigma_{\tilde{x}})}\, p \log p$ we have
$$\|\hat{\beta}_{IWS} - \beta\| \lesssim \left( \left( \sigma_\epsilon \sigma_x + \pi \sigma_\epsilon (\sigma_w + 1) + \pi \|\beta\| \right) \sqrt{\frac{p \log p}{n_{subs}}} + \pi \sqrt{p}\, \|\beta\| \right) \cdot \frac{1}{\lambda}$$
where $0 < \lambda \leq \lambda_{\min}(\Sigma_{\tilde{x}})$ and $\Sigma_{\tilde{x}}$ is the covariance of the influence weighted subsampled data.

Figure 1: Comparison of the distribution of (a) the influence and (b) the leverage for corrupted and non-corrupted points. The $\ell_1$ distance between the histograms is shown in brackets (1.1 for influence, 0.1 for leverage).

Remark 3.
Theorem 2 states that the influence weighted subsampling estimator removes the proportional dependence of the error on $\sigma_w$, so the additional variance terms scale as $O(\pi\sigma_w\sqrt{p/n_{subs}})$ and $O(\pi\sqrt{p/n_{subs}})$. The relative contribution of the bias term is $\pi \sqrt{p}\, \|\beta\|$, compared with $\pi \sigma_w^2 \sqrt{p}\, \|\beta\|$ for the OLS or non-influence-based subsampling methods.

Comparison with the fully corrupted setting. We note that the bound in Theorem 1 is similar to the bound in [5] for an estimator where all data points are corrupted (i.e. $\pi = 1$) and where incomplete knowledge of the covariance matrix of the corruptions, $\Sigma_w$, is used. The additional bias in the estimator is proportional to the uncertainty in the estimate of $\Sigma_w$; in Theorem 1 this corresponds to $\sigma_w^2$. Unbiased estimation is possible if $\Sigma_w$ is known. See the Supplementary Information for further discussion, where the relevant results from [5] are provided in Section SI.6.1 as Lemma 16.

7 Experimental results

We compare IWS-LS against SRHT-LS [14] and ULURU [7]. These competing methods represent the current state of the art in fast randomized least squares. Since SRHT-LS is equivalent to LEV-LS [9], the comparison will highlight the difference between importance sampling according to the two different types of regression diagnostic in the corrupted model. Similar to IWS-LS, ULURU is also a two-step procedure, where the first step is equivalent to SRHT-LS. The second step reduces bias by subtracting the result of regressing onto the residual. The experiments with the corrupted data model will demonstrate the difference in robustness of IWS-LS and ULURU to corruptions in the observations. Note that we do not compare with SGD. Although SGD has excellent properties for large-scale linear regression, we are not aware of a convex loss function which is robust to the corruption model we propose.
We assess the empirical performance of our method compared with standard and state-of-the-art randomized approaches to linear regression in several different scenarios. We evaluate these methods on the basis of the estimation error: the $\ell_2$ norm of the difference between the true weights and the learned weights, $\|\hat{\beta} - \beta\|$. We present additional results for the root mean squared prediction error (RMSE) on the test set in §SI.7. For all the experiments on simulated data sets we use $n_{train} = 100{,}000$, $n_{test} = 1000$, $p = 500$. For data sets of this size, computing exact leverage is impractical, and so we report results for IWS-LS in §SI.7. For aIWS-LS and aRWS-LS we used the same number of subsamples to approximate the leverage scores and residuals as for solving the regression. For aIWS-LS we set $r_2 = p/2$ (see Eq. (5)). The results are averaged over 100 runs.

Corrupted data. We investigate the corrupted data noise model described in Eqs. (1)-(2). We show three scenarios where $\pi \in \{0.05, 0.1, 0.3\}$. $X$ and $W$ were sampled from independent, zero-mean Gaussians with standard deviations $\sigma_x = 1$ and $\sigma_w = 0.4$, respectively. The true regression coefficients $\beta$ were sampled from a standard Gaussian. We added i.i.d. zero-mean Gaussian noise with standard deviation $\sigma_\epsilon = 0.1$. Figure 1 shows the difference in the distribution of influence and leverage between non-corrupted points (top) and corrupted points (bottom) for a data set with 30% corrupted points. The distribution of leverage is very similar between the corrupted and non-corrupted points, as quantified by the $\ell_1$ difference. This suggests that leverage alone cannot be used to identify corrupted points.

Figure 2: Comparison of mean estimation error and standard deviation on two corrupted simulated data sets ((a) 5% corruptions, (b) 30% corruptions) and (c) the airline delay data set.
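The qualitative claim of Figure 1 (leverage barely separates corrupted from clean points, while influence separates them strongly) can be reproduced with a smaller simulation; $p$ is reduced from 500 to 50 here to keep the exact leverage computation cheap:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, pi, sigma_w = 2000, 50, 0.3, 0.4     # p reduced from 500 for speed
beta = rng.standard_normal(p)
X = rng.standard_normal((n, p))
corrupted = rng.random(n) < pi             # which rows are corrupted
W = sigma_w * rng.standard_normal((n, p)) * corrupted[:, None]
Z, y = X + W, X @ beta + 0.1 * rng.standard_normal(n)

beta_ols, *_ = np.linalg.lstsq(Z, y, rcond=None)
e = y - Z @ beta_ols                                        # OLS residuals
l = np.einsum('ij,ji->i', Z, np.linalg.solve(Z.T @ Z, Z.T)) # leverage
d = e**2 * l / (1.0 - l)**2                                 # influence

lev_ratio = l[corrupted].mean() / l[~corrupted].mean()
inf_ratio = d[corrupted].mean() / d[~corrupted].mean()
```

With these settings the mean leverage of corrupted and clean rows is nearly identical, while the mean influence of corrupted rows is larger by more than an order of magnitude, consistent with the histograms in Figure 1.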
On the other hand, although there are some corrupted points with small influence, corrupted points typically have a much larger influence than non-corrupted points. We give a theoretical explanation of this phenomenon in §SI.3 (Remarks 4 and 5). Figures 2(a) and (b) show the estimation error and the mean squared prediction error for different subsample sizes. In this setting, computing IWS-LS is impractical (due to the exact leverage computation), so we omit its results, but we notice that aIWS-LS and aRWS-LS quickly improve over the full least squares solution and the other randomized approximations in all simulation settings. In all cases, influence-based methods also achieve lower-variance estimates. For 30% corruptions and a small number of samples, ULURU outperforms the other subsampling methods. However, as the number of samples increases, influence-based methods start to outperform OLS. Here, ULURU converges quickly to the OLS solution but is not able to overcome the bias introduced by the corrupted data points. Results for 10% corruptions are shown in Figs. 5 and 6, and we provide results on smaller corrupted data sets (to show the performance of IWS-LS), as well as on non-corrupted data simulated according to [13], in §SI.7.

Airline delay data set. The data set consists of details of all commercial flights in the USA over 20 years; the data, along with visualisations, are available from http://stat-computing.org/dataexpo/2009/. Selecting the first $n_{train} = 13{,}000$ US Airways flights from January 2000 (corresponding to approximately 1.5 weeks), our goal is to predict the delay time of the next $n_{test} = 5{,}000$ US Airways flights. The features in this data set consist of a binary vector representing origin–destination pairs and a real value representing distance ($p = 170$). The data set might be expected to violate the usual i.i.d. sub-Gaussian design assumption of standard linear regression, since the lengths of delays are often very different depending on the day.
For example, delays may be longer due to public holidays or on weekends. Of course, such regular events could be accounted for in the modelling step, but some unpredictable outliers, such as weather delays, may also occur. Results are presented in Figure 2(c); the RMSE is the error in predicted delay time in minutes. Since this data set is smaller, we can run IWS-LS to observe the accuracy of aIWS-LS and aRWS-LS in comparison. For more than 3000 samples, these algorithms outperform OLS and quickly approach IWS-LS. The result suggests that the corrupted observation model is a good model for this data set. Furthermore, ULURU is unable to achieve the full accuracy of the OLS solution.

8 Conclusions

We have demonstrated, theoretically and empirically, under the generalised corrupted observation model, that influence weighted subsampling is able to significantly reduce both the bias and variance compared with the OLS estimator and other randomized approximations which do not take influence into account. Importantly, our fast approximation aRWS-LS performs similarly to IWS-LS. We find that ULURU quickly converges to the OLS estimate, although it is not able to overcome the bias induced by the corrupted data points despite its two-step procedure. The performance of IWS-LS relative to OLS on the airline delay problem suggests that the corrupted observation model is a more realistic modelling scenario than the standard sub-Gaussian design model for some tasks. Software is available at http://people.inf.ethz.ch/kgabriel/software.html.

Acknowledgements. We thank David Balduzzi, Cheng Soon Ong and the anonymous reviewers for invaluable discussions, suggestions and comments.

References

[1] Nir Ailon and Edo Liberty. Fast dimension reduction using Rademacher series on dual BCH codes. In 19th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1–9, 2008.
[2] David A. Belsley, Edwin Kuh, and Roy E. Welsch. Regression Diagnostics: Identifying Influential Data and Sources of Collinearity.
Wiley, 1981.
[3] Christos Boutsidis and Alex Gittens. Improved matrix algorithms via the Subsampled Randomized Hadamard Transform. 2012. arXiv:1204.0062v4 [cs.DS].
[4] Yudong Chen and Constantine Caramanis. Orthogonal matching pursuit with noisy and missing data: low and high dimensional results. June 2012. arXiv:1206.0823.
[5] Yudong Chen and Constantine Caramanis. Noisy and missing data regression: distribution-oblivious support recovery. In International Conference on Machine Learning, 2013.
[6] Yudong Chen, Constantine Caramanis, and Shie Mannor. Robust sparse regression under adversarial corruption. In International Conference on Machine Learning, 2013.
[7] P. Dhillon, Y. Lu, D. P. Foster, and L. Ungar. New subsampling algorithms for fast least squares regression. In Advances in Neural Information Processing Systems, 2013.
[8] Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, and David P. Woodruff. Fast approximation of matrix coherence and statistical leverage. September 2011. arXiv:1109.3843v2 [cs.DS].
[9] Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Sampling algorithms for ℓ2 regression and applications. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '06, pages 1127–1136, New York, NY, USA, 2006. ACM.
[10] Petros Drineas, Michael W. Mahoney, S. Muthukrishnan, and Tamás Sarlós. Faster least squares approximation. Numerische Mathematik, 117(2):219–249, 2011.
[11] Daniel Hsu, Sham Kakade, and Tong Zhang. A tail inequality for quadratic forms of subgaussian random vectors. Electronic Communications in Probability, 17(52):1–6, 2012.
[12] Po-Ling Loh and Martin J. Wainwright. High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity. The Annals of Statistics, 40(3):1637–1664, June 2012.
[13] Ping Ma, Michael W. Mahoney, and Bin Yu. A statistical perspective on algorithmic leveraging. In Proceedings of the International Conference on Machine Learning, 2014.
[14] Michael W. Mahoney.
Randomized algorithms for matrices and data. April 2011. arXiv:1104.5557v3 [cs.DS].
[15] Brian McWilliams and Giovanni Montana. Multi-view predictive partitioning in high dimensions. Statistical Analysis and Data Mining, 5(4):304–321, 2012.
[16] Brian McWilliams and Giovanni Montana. Subspace clustering of high-dimensional data: a predictive approach. Data Mining and Knowledge Discovery, 28:736–772, 2014.
[17] Joel A. Tropp. Improved analysis of the subsampled randomized Hadamard transform. November 2010. arXiv:1011.1595v4 [math.NA].
[18] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. November 2010. arXiv:1011.3027.
[19] Roy E. Welsch. Regression sensitivity analysis and bounded-influence estimation. In Evaluation of Econometric Models, pages 153–167. Academic Press, 1980.
Incremental Local Gaussian Regression

Franziska Meier1 (fmeier@usc.edu), Philipp Hennig2 (phennig@tue.mpg.de), Stefan Schaal1,2 (sschaal@usc.edu)
1 University of Southern California, Los Angeles, CA 90089, USA
2 Max Planck Institute for Intelligent Systems, Spemannstraße 38, Tübingen, Germany

Abstract

Locally weighted regression (LWR) was created as a nonparametric method that can approximate a wide range of functions, is computationally efficient, and can learn continually from very large amounts of incrementally collected data. As an interesting feature, LWR can regress on non-stationary functions, a beneficial property, for instance, in control problems. However, it does not provide a proper generative model for function values, and existing algorithms have a variety of manual tuning parameters that strongly influence bias, variance and learning speed of the results. Gaussian (process) regression, on the other hand, does provide a generative model with rather black-box automatic parameter tuning, but it has higher computational cost, especially for big data sets and if a non-stationary model is required. In this paper, we suggest a path from Gaussian (process) regression to locally weighted regression, where we retain the best of both approaches. Using a localizing function basis and approximate inference techniques, we build a Gaussian (process) regression algorithm of increasingly local nature and similar computational complexity to LWR. Empirical evaluations are performed on several synthetic and real robot data sets of increasing complexity and (big) data scale, and demonstrate that we consistently achieve on-par or superior performance compared with current state-of-the-art methods, while retaining a principled approach to fast incremental regression with minimal manual tuning parameters.

1 Introduction

Besides accuracy and sample efficiency, computational cost is a crucial design criterion for machine learning algorithms in real-time settings, such as control problems.
An example is the modeling of robot dynamics: the sensors in a robot can produce thousands of data points per second, quickly amassing a coverage of the task-related workspace, but what really matters is that the learning algorithm incorporates this data in real time, since a physical system cannot necessarily stop and wait during control; a biped, for example, would simply fall over. Thus, a learning method in such settings should produce a good local model in fractions of a second, and be able to extend this model as the robot explores new areas of a very high dimensional workspace that can often not be anticipated by collecting "representative" training data. Ideally, it should rapidly produce a good (local) model from a large number N of data points by adjusting a small number M of parameters. In robotics, local learning approaches such as locally weighted regression [1] have thus been favored over global approaches such as Gaussian process regression [2] in the past. Local regression models approximate the function in the neighborhood of a query point $x_*$. Each local model's region of validity is defined by a kernel. Learning the shape of that kernel [3] is the key component of locally weighted learning. Schaal & Atkeson [4] introduced a non-memory-based version of LWR to compress large amounts of data into a small number of parameters. Instead of keeping data in memory and constructing local models around query points on demand, their algorithm incrementally compresses data into M local models, where M grows automatically to cover the experienced input space of the data. Each local model can have its own distance metric, allowing local adaptation to local characteristics like curvature or noise. Furthermore, each local model is trained independently, yielding a highly efficient, parallelizable algorithm. Both its local adaptiveness and its low computational cost (linear, O(NM)) have made LWR feasible and successful in control learning.
The downside is that LWR requires several tuning parameters, whose optimal values can be highly data dependent. This is at least partly a result of the strongly localized training, which does not allow models to 'coordinate', or to benefit from other local models in their vicinity. Gaussian process regression (GPR) [2], on the other hand, offers principled inference for hyperparameters, but at high computational cost. Recent progress in sparsifying Gaussian processes [5, 6] has resulted in computationally efficient variants of GPR. Sparsification is achieved either through a subset selection of support points [7, 8] or through sparsification of the spectrum of the GP [9, 10]. Online versions of such sparse GPs [11, 12, 13] have produced a viable alternative for real-time model learning problems [14]. However, these sparse approaches typically learn one global distance metric, making it difficult to fit the non-stationary data encountered in robotics. Moreover, restricting the resources in a GP also restricts the function space that can be covered, such that, as the workspace that must be covered grows, the accuracy of learning will naturally diminish. Here we develop a probabilistic alternative to LWR that, like GPR, has a global generative model, but is locally adaptive and retains LWR's fast incremental training. We start in the batch setting, where rethinking LWR's localization strategy results in a loss function coupling local models that can be modeled within the Gaussian regression framework (Section 2). Modifying and approximating the global model, we arrive at a localized batch learning procedure (Section 3), which we term Local Gaussian Regression (LGR). Finally, we develop an incremental version of LGR that processes streaming data (Section 4). Previous probabilistic formulations of local regression [15, 16, 17] are bottom-up constructions: generative models for one local model at a time.
Ours is a top-down approach, approximating a global model to give a localized regression algorithm similar to LWR.

2 Background

Locally weighted regression (LWR) with a fixed set of $M$ local models minimizes the loss function
$$L(w) = \sum_{n=1}^N \sum_{m=1}^M \eta_m(x_n) \left(y_n - \xi_m(x_n)^\top w_m\right)^2 = \sum_{m=1}^M L(w_m). \quad (1)$$
The right-hand side decomposes $L(w)$ into independent losses for the $M$ models. We assume each model has $K$ local feature functions $\xi_{mk}(x)$, so that the $m$-th model's prediction at $x$ is
$$f_m(x) = \sum_{k=1}^K \xi_{mk}(x) w_{mk} = \xi_m(x)^\top w_m. \quad (2)$$
$K = 2$, $\xi_{m1}(x) = 1$, $\xi_{m2}(x) = (x - c_m)$ gives a linear model around $c_m$. Higher polynomials can be used too, but linear models have a favorable bias-variance trade-off [18]. The models are localized by a non-negative, symmetric and integrable weighting $\eta_m(x)$, typically the radial basis function
$$\eta_m(x) = \exp\left[-\frac{(x - c_m)^2}{2\lambda_m^2}\right], \quad \text{or} \quad \eta_m(x) = \exp\left[-\tfrac{1}{2}(x - c_m) \Lambda_m^{-1} (x - c_m)^\top\right] \quad (3)$$
for $x \in \mathbb{R}^D$, with center $c_m$ and length scale $\lambda_m$ or positive definite metric $\Lambda_m$. $\eta_m(x_n)$ localizes the effect of errors on the least-squares estimate of $w_m$: data points far away from $c_m$ have little effect. The prediction $y_*$ at a test point $x_*$ is a normalized weighted average of the local predictions $y_{*,m}$:
$$y_* = \frac{\sum_{m=1}^M \eta_m(x_*) f_m(x_*)}{\sum_{m=1}^M \eta_m(x_*)} \quad (4)$$
LWR effectively trains $M$ linear models on $M$ separate data sets $y_m(x_n) = \sqrt{\eta_m(x_n)}\, y_n$. These models differ from the one in Eq. (4) that is used at test time. This smoothes discontinuous transitions between models, but also means that LWR cannot be cast probabilistically as one generative model for training and test data simultaneously. (This holds for any bottom-up construction that learns local

[Figure 1, left: Bayesian linear regression with $M$ feature functions $\phi^n_m = \phi_m(x_n) = \eta^n_m \xi^n_m$, where $\eta^n_m$ can be a function localizing the effect of the $m$th input function $\xi^n_m$ towards the prediction of $y_n$.]
[Figure 1, right: Latent variables $f^n_m$ placed between the features and $y_n$ decouple the $M$ regression parameters $w_m$ and effectively create $M$ local models connected only through the latent $f^n_m$.]

models independently and combines them as above, e.g., [15, 16].) The independence of local models is key to LWR's training: changing one local model does not affect the others. While this lowers cost, we believe it is also partially responsible for LWR's sensitivity to manually tuned parameters. Here, we investigate a different strategy to achieve localization, aiming to retain the computational complexity of LWR while adding a sense of globality. Instead of using $\eta_m$ to localize the training error of data points, we localize a model's contribution $\hat{y}_m = \xi(x)^\top w_m$ towards the global fit of a training point $y$, similar to how LWR operates at test time (Eq. 4). Thus, already during training, local models must collaborate to fit a data point: $\hat{y} = \sum_{m=1}^M \eta_m(x) \xi(x)^\top w_m$. Our loss function is
$$L(w) = \sum_{n=1}^N \left(y_n - \sum_{m=1}^M \eta_m(x_n) \xi_m(x_n)^\top w_m\right)^2 = \sum_{n=1}^N \left(y_n - \sum_{m=1}^M \phi_m(x_n)^\top w_m\right)^2, \quad (5)$$
combining the localizer $\eta_m(x_n)$ and the $m$th input function $\xi_m(x_n)$ to form the feature $\phi_m(x_n) = \eta_m(x_n) \xi_m(x_n)$. This form of localization couples all local models, as in classical radial basis function networks [19]. At test time, all local predictions form a joint prediction
$$y_* = \sum_{m=1}^M y_{*m} = \sum_{m=1}^M \phi_m(x_*)^\top w_m \quad (6)$$
This loss can be minimized through a regularized least-squares estimator for $w$ (the concatenation of all $w_m$). We follow the probabilistic interpretation of least-squares estimation as inference on the weights $w$, from a Gaussian prior $p(w) = \mathcal{N}(w; \mu_0, \Sigma_0)$ and likelihood $p(y \mid \phi, w) = \mathcal{N}(y; \phi^\top w, \beta_y^{-1} I)$. The probabilistic formulation has additional value as a generative model for all (training and test) data points $y$, which can be used to learn hyperparameters (Figure 1, left).
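The two localization strategies can be contrasted directly in code: Eq. (1) fits $M$ independent weighted least-squares problems, while the loss of Eq. (5) is one joint least-squares problem over the stacked features $\phi_{mk}(x) = \eta_m(x)\xi_{mk}(x)$. The small ridge regularizer below is our stand-in for the Gaussian prior; centers, length scales and the 1-D test function are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 2 * np.pi, 500)
y = np.sin(x) + 0.05 * rng.standard_normal(500)
c = np.linspace(0.0, 2 * np.pi, 15)          # fixed centers
lam = 0.4                                    # shared length scale

def eta(x, c, lam):                          # RBF localizer, Eq. (3)
    return np.exp(-(x[:, None] - c[None, :])**2 / (2 * lam**2))

# --- LWR, Eq. (1): each local linear model fit by its own weighted LS ---
def lwr_fit(x, y, c, lam):
    W, E = np.empty((len(c), 2)), eta(x, c, lam)
    for m in range(len(c)):
        Xi = np.stack([np.ones_like(x), x - c[m]], axis=1)   # xi_m, K = 2
        A = Xi * E[:, m:m + 1]
        W[m] = np.linalg.solve(Xi.T @ A + 1e-8 * np.eye(2), A.T @ y)
    return W

def lwr_predict(xq, c, lam, W):              # normalized blend, Eq. (4)
    E = eta(xq, c, lam)
    F = W[:, 0][None, :] + (xq[:, None] - c[None, :]) * W[:, 1][None, :]
    return (E * F).sum(axis=1) / E.sum(axis=1)

# --- Localized global regression, Eqs. (5)-(6): one joint LS problem ---
def phi(x, c, lam):                          # phi_mk(x) = eta_m(x) xi_mk(x)
    E = eta(x, c, lam)
    return np.hstack([E, E * (x[:, None] - c[None, :])])     # n x 2M

def lgr_fit(x, y, c, lam, reg=1e-6):
    P = phi(x, c, lam)
    return np.linalg.solve(P.T @ P + reg * np.eye(P.shape[1]), P.T @ y)

W = lwr_fit(x, y, c, lam)
w = lgr_fit(x, y, c, lam)
xq = np.linspace(0.3, 2 * np.pi - 0.3, 50)
y_lwr = lwr_predict(xq, c, lam, W)
y_lgr = phi(xq, c, lam) @ w                  # unnormalized sum, Eq. (6)
```

Both fits recover the underlying function on this simple problem; the difference lies in the training coupling, which is exactly what the probabilistic treatment in the following sections exploits.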
The posterior is $p(w \mid y, \phi) = \mathcal{N}(w; \mu_N, \Sigma_N)$ with (7)
$$\mu_N = \left(\Sigma_0^{-1} + \beta_y \Phi^\top \Phi\right)^{-1} \left(\beta_y \Phi^\top y + \Sigma_0^{-1} \mu_0\right) \quad \text{and} \quad \Sigma_N = \left(\Sigma_0^{-1} + \beta_y \Phi^\top \Phi\right)^{-1} \quad (8)$$
(heteroscedastic data will be addressed below). The prediction for $f(x_*)$ with features $\phi(x_*) =: \phi_*$ is also Gaussian, with $p(f(x_*) \mid y, \phi) = \mathcal{N}(f(x_*); \phi_* \mu_N, \phi_* \Sigma_N \phi_*^\top)$. As is widely known, this framework can be extended nonparametrically by a limit that replaces all inner products $\phi(x_i) \Sigma_0 \phi(x_j)^\top$ with a Mercer (positive semi-definite) kernel $k(x_i, x_j)$, corresponding to a Gaussian process prior. The direct connection between Gaussian regression and the elegant theory of Gaussian processes is a conceptual strength. The main downside, relative to LWR, is computational cost: calculating the posterior (7) requires solving the least-squares problem for all $F$ parameters $w$ jointly, by inverting the Gram matrix $(\Sigma_0^{-1} + \beta_y \Phi^\top \Phi)$. In general, this requires $O(F^3)$ operations. Below we propose approximations to lower the computational cost of this operation to a level comparable to LWR, while retaining the probabilistic interpretation and the modeling robustness of the full Gaussian model.

3 Local Parametric Gaussian Regression

The above shows that Gaussian regression with features $\phi_m(x) = \eta_m(x) \xi_m(x)$ can be interpreted as global regression with $M$ models, where $\eta_m(x_n)$ localizes the contribution of the model $\xi_m(x)$ towards the joint prediction of $y_n$. The choice of local parametric model $\xi_m$ is essentially free. Local linear regression in a $K$-dimensional input space takes the form $\xi_m(x_n) = x_n - c_m$, and can be viewed as the analog of locally weighted linear regression. Locally constant models $\xi_m(x) = 1$ correspond to Gaussian regression with RBF features. Generalizing to $M$ local models with $K$ parameters each, the feature function $\phi^n_{mk}$ combines the $k$th component of the local model $\xi_{km}(x_n)$, localized by the $m$-th weighting function $\eta_m(x_n)$:
$$\phi^n_{mk} := \phi_{mk}(x_n) = \eta_m(x_n) \xi_{km}(x_n). \quad (9)$$
Treating $mk$ as the indices of a vector in $\mathbb{R}^{MK}$, Equation (7) gives localized linear Gaussian regression.
Since it will become necessary to prune the model, we adopt the classic idea of automatic relevance determination [20, 21] using a factorizing prior
$$p(w \mid A) = \prod_{m=1}^M \mathcal{N}(w_m; 0, A_m^{-1}) \quad \text{with} \quad A_m = \mathrm{diag}(\alpha_{m1}, \ldots, \alpha_{mK}). \quad (10)$$
Thus every component $k$ of local model $m$ has its own precision, and can be pruned out by setting $\alpha_{mk} \to \infty$. Section 3.1 assumes a fixed number $M$ of local models with fixed centers $c_m$. The parameters are $\theta = \{\beta_y, \{\alpha_{mk}\}, \{\lambda_{md}\}\}$, where $K$ is the dimension of the local model $\xi(x)$ and $D$ is the dimension of the input $x$. We propose an approximation for estimating $\theta$. Section 4 then describes an incremental algorithm allocating local models as needed, adapting $M$ and $c_m$.

3.1 Learning in Local Gaussian Regression

Exact Gaussian regression with localized features still has cubic cost. However, because of the localization, correlation between distant local models approximately vanishes, and inference is approximately independent between local models. To use this near-independence for cheap local approximate inference, similar to LWR, we introduce a latent variable $f^n_m$ for each local model $m$ and datum $x_n$, as in probabilistic backfitting [22]. Intuitively, the $f$ form approximate local targets, against which the local parameters fit (Figure 1, right). Moreover, as formalized below, each $f^n_m$ has its own variance parameter, which re-introduces the ability to model heteroscedastic data. This modified model motivates a factorizing variational bound (Section 3.1.1). Rendering the local models computationally independent, it allows for fast approximate inference in the local Gaussian model. Hyperparameters can be learned by approximate maximum likelihood (Section 3.1.2), i.e. iterating between constructing a bound $q(z \mid \theta)$ on the posterior over hidden variables $z$ (defined below) given current parameter estimates $\theta$, and optimizing $q$ with respect to $\theta$.
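The pruning mechanism of the ARD prior (10) can be exercised on a plain linear model: iterating the standard ARD update $\alpha_k^{-1} = \mu_{w_k}^2 + \Sigma_{w,kk}$ (cf. Eq. (21) below) drives the precisions of irrelevant components to large values. A minimal sketch, with the noise precision assumed known and all constants illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 200, 5
X = rng.standard_normal((n, d))
w_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])    # last three are irrelevant
y = X @ w_true + 0.1 * rng.standard_normal(n)

beta = 1.0 / 0.1**2                 # noise precision, assumed known here
alpha = np.ones(d)                  # per-component ARD precisions, Eq. (10)
for _ in range(50):
    # Gaussian posterior over weights under the current ARD prior.
    S = np.linalg.inv(np.diag(alpha) + beta * X.T @ X)
    mu = beta * S @ X.T @ y
    # ARD update: 1 / alpha_k = mu_k^2 + Sigma_kk (cf. Eq. (21)).
    alpha = 1.0 / (mu**2 + np.diag(S))
```

After a few iterations the precisions of the three irrelevant components are orders of magnitude larger than those of the relevant ones, which is exactly the signal the incremental algorithm of Section 4 uses to prune local models.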
3.1.1 Variational Bound

The complete data likelihood of the modified model (Figure 1, right) is
$$p(y, f, w \mid \Phi, \theta) = \prod_{n=1}^N \mathcal{N}\Big(y_n; \sum_m f^n_m, \beta_y^{-1}\Big) \prod_{n=1}^N \prod_{m=1}^M \mathcal{N}(f^n_m; \phi^{n\top}_m w_m, \beta_{f_m}^{-1}) \prod_{m=1}^M \mathcal{N}(w_m; 0, A_m^{-1}) \quad (11)$$
Our Gaussian model involves the latent variables $w$ and $f$, the precisions $\beta = \{\beta_y, \beta_{f_1}, \ldots, \beta_{f_M}\}$, and the model parameters $\lambda_m, c_m$. We treat $w$ and $f$ as probabilistic variables and estimate $\theta = \{\beta, \lambda, c\}$. On $w, f$, we construct a variational bound $q(w, f)$ imposing the factorization $q(w, f) = q(w) q(f)$. The variational free energy is a lower bound on the log evidence for the observations $y$:
$$\log p(y \mid \theta) \geq \int q(w, f) \log \frac{p(y, w, f \mid \theta)}{q(w, f)}\, dw\, df. \quad (12)$$
This bound is maximized by the $q(w, f)$ minimizing the relative entropy $D_{KL}[q(w, f) \,\|\, p(w, f \mid y, \theta)]$, the distribution for which $\log q(w) = E_f[\log p(y \mid f, w) p(w, f)]$ and $\log q(f) = E_w[\log p(y \mid f, w) p(w, f)]$. It is relatively easy to show (e.g. [23]) that these distributions are Gaussian in both $w$ and $f$. The approximation on $w$ is
$$\log q(w) = E_f\left[\sum_{n=1}^N \log p(f^n \mid \phi^n, w) + \log p(w \mid A)\right] = \log \prod_{m=1}^M \mathcal{N}(w_m; \mu_{w_m}, \Sigma_{w_m}) \quad (13)$$
where
$$\Sigma_{w_m} = \left(\beta_{f_m} \sum_{n=1}^N \phi^n_m \phi^{n\top}_m + A_m\right)^{-1} \in \mathbb{R}^{K \times K} \quad \text{and} \quad \mu_{w_m} = \beta_{f_m} \Sigma_{w_m} \left(\sum_{n=1}^N \phi^n_m E[f^n_m]\right) \in \mathbb{R}^{K \times 1} \quad (14)$$
The posterior update equations for the weights are local: each of the local models updates its parameters independently. This comes at the cost of having to update the belief over the variables $f^n_m$, which achieves a coupling between the local models. The Gaussian variational bound on $f$ is
$$\log q(f^n) = E_w\left[\log p(y_n \mid f^n, \beta_y) + \log p(f^n \mid \phi^n_m, w)\right] = \log \mathcal{N}(f^n; \mu_{f^n}, \Sigma_f), \quad (15)$$
where
$$\Sigma_f = B^{-1} - B^{-1} \mathbf{1} \left(\beta_y^{-1} + \mathbf{1}^\top B^{-1} \mathbf{1}\right)^{-1} \mathbf{1}^\top B^{-1} = B^{-1} - \frac{B^{-1} \mathbf{1} \mathbf{1}^\top B^{-1}}{\beta_y^{-1} + \mathbf{1}^\top B^{-1} \mathbf{1}} \quad (16)$$
$$\mu_{f^n_m} = E_w[w_m]^\top \phi^n_m + \frac{\beta_{f_m}^{-1}}{\beta_y^{-1} + \sum_{m'=1}^M \beta_{f_{m'}}^{-1}} \left(y_n - \sum_{m'=1}^M E_w[w_{m'}]^\top \phi^n_{m'}\right) \quad (17)$$
and $B = \mathrm{diag}(\beta_{f_1}, \ldots, \beta_{f_M})$. $\mu_{f^n_m}$ is the posterior mean of the $m$-th model's virtual target for data point $n$. These updates can be performed in $O(MK)$. Note how the posterior over the hidden variables $f$ couples the local models, allowing for a form of message passing between local models.
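The alternating updates (14) and (17) amount to a simple coordinate-ascent loop. The sketch below fixes all precisions, length scales and ARD parameters (hyperparameter optimization is deferred to Section 3.1.2) and checks that the coupled local models jointly reconstruct a 1-D target; all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n, M = 400, 12
x = rng.uniform(0.0, 2 * np.pi, n)
y = np.sin(x) + 0.05 * rng.standard_normal(n)
c = np.linspace(0.0, 2 * np.pi, M)
lam = 0.5
eta = np.exp(-(x[:, None] - c[None, :])**2 / (2 * lam**2))      # n x M
# Local features phi_m(x) = eta_m(x) * [1, x - c_m]  (K = 2).
Phi = np.stack([eta, eta * (x[:, None] - c[None, :])], axis=2)  # n x M x 2

beta_y = beta_f = 1.0 / 0.05**2       # fixed precisions (no 3.1.2 updates)
A = 1e-3 * np.eye(2)                  # broad, fixed ARD prior precision
mu_w = np.zeros((M, 2))
coup = (1.0 / beta_f) / (1.0 / beta_y + M / beta_f)  # factor in Eq. (17)
for _ in range(200):
    pred = np.einsum('nmk,mk->nm', Phi, mu_w)        # w_m^T phi_nm
    # Eq. (17): virtual targets share out the global residual.
    mu_f = pred + coup * (y - pred.sum(axis=1))[:, None]
    for m in range(M):                               # Eq. (14), per model
        S = np.linalg.inv(beta_f * Phi[:, m].T @ Phi[:, m] + A)
        mu_w[m] = beta_f * S @ (Phi[:, m].T @ mu_f[:, m])
y_hat = np.einsum('nmk,mk->n', Phi, mu_w)            # joint prediction
```

Each weight update touches only one local model's $K \times K$ system, while the virtual-target update (17) is the only step that couples the models, mirroring the message-passing interpretation above.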
3.1.2 Optimizing Hyperparameters

To set the parameters $\theta = \{\beta_y, \{\beta_{f_m}, \lambda_m\}_{m=1}^M, \{\alpha_{mk}\}\}$, we maximize the expected complete log likelihood under the variational bound:
$$E_{f,w}[\log p(y, f, w \mid \Phi, \theta)] = E_{f,w}\Big\{\sum_{n=1}^N \Big[\log \mathcal{N}\Big(y_n; \sum_{m=1}^M f^n_m, \beta_y^{-1}\Big) + \sum_{m=1}^M \log \mathcal{N}(f^n_m; w_m^\top \phi^n_m, \beta_{f_m}^{-1})\Big] + \sum_{m=1}^M \log \mathcal{N}(w_m; 0, A_m^{-1})\Big\}. \quad (18)$$
Setting the gradient of this expression to zero leads to the following update equations for the variances:
$$\beta_y^{-1} = \frac{1}{N} \sum_{n=1}^N (y_n - \mathbf{1}^\top \mu_{f^n})^2 + \mathbf{1}^\top \Sigma_f \mathbf{1} \quad (19)$$
$$\beta_{f_m}^{-1} = \frac{1}{N} \sum_{n=1}^N \left[(\mu_{f^n_m} - \mu_{w_m}^\top \phi^n_m)^2 + \phi^{n\top}_m \Sigma_{w_m} \phi^n_m\right] + \sigma^2_{f_m} \quad (20)$$
$$\alpha_{mk}^{-1} = \mu_{w_{mk}}^2 + \Sigma_{w,kk} \quad (21)$$
The gradient with respect to the scales of each local model is completely localized:
$$\frac{\partial E_{f,w}[\log p(y, f, w \mid \Phi, \theta)]}{\partial \lambda_{md}} = \frac{\partial E_{f,w}\left[\sum_{n=1}^N \log \mathcal{N}(f^n_m; w_m^\top \phi^n_m, \beta_{f_m}^{-1})\right]}{\partial \lambda_{md}} \quad (22)$$
We use gradient ascent to optimize the length scales $\lambda_{md}$. All the necessary equations have low cost and, with the exception of the variance $1/\beta_y$, all hyper-parameter updates are solved independently for each local model, similar to LWR. In contrast to LWR, however, these local updates do not cause a potentially catastrophic shrinking of the length scales. In LWR, both inputs and outputs are weighted by the localizing function, so reducing the length scale improves the fit. The localization in Equation (22) only affects the influence of regression model $m$, but the targets still need to be fit accordingly. Shrinking of local models only happens if it actually improves the fit against the unweighted targets $f^n_m$, so that no complex cross-validation procedures are required.

3.1.3 Prediction

Predictions at a test point $x_*$ arise from marginalizing over both $f$ and $w$:
$$\int \left[\int \mathcal{N}(y_*; \mathbf{1}^\top f^*, \beta_y^{-1})\, \mathcal{N}(f^*; W^\top \phi(x_*), B^{-1})\, df^*\right] \mathcal{N}(w; \mu_w, \Sigma_w)\, dw = \mathcal{N}\Big(y_*; \sum_m \mu_{w_m}^\top \phi^*_m,\; \sigma^2(x_*)\Big) \quad (23)$$
where $\sigma^2(x_*) = \beta_y^{-1} + \sum_{m=1}^M \beta_{f_m}^{-1} + \sum_{m=1}^M \phi^{*\top}_m \Sigma_{w_m} \phi^*_m$, which is linear in $M$ and $K$.

4 Incremental Local Gaussian Regression

The above approximate posterior updates apply in the batch setting, assuming the number $M$ and the locations $c$ of the local models are fixed.
This section constructs an online algorithm for incrementally incoming data, creating new local models when needed. There has been recent interest in variational online algorithms for efficient learning on large data sets [24, 25]. Stochastic variational inference [24] operates under the assumption that the data set has a fixed size $N$ and optimizes the variational lower bound for $N$ data points via stochastic gradient descent. Here, we follow algorithms for streaming data sets of unknown size. Probabilistic methods in this setting typically follow a Bayesian filtering approach [26, 25, 27], in which the posterior after $n - 1$ data points becomes the prior for the $n$-th incoming data point. Following this principle, we extend the model presented in Section 3 and treat the precision variables $\{\beta_{f_m}, \alpha_{mk}\}$ as random variables, assuming Gamma priors $p(\beta_{f_m}) = \mathcal{G}(\beta_{f_m} \mid a^\beta_0, b^\beta_0)$ and $p(\alpha_m) = \prod_{k=1}^K \mathcal{G}(\alpha_{mk} \mid a^\alpha_0, b^\alpha_0)$. Thus, the factorized approximation of the posterior $q(z)$ over all random variables $z = \{f, w, \alpha, \beta_f\}$ becomes
$$q(z) = q(f, w, \beta_f, \alpha) = q(f)\, q(w)\, q(\beta_f)\, q(\alpha) \quad (24)$$
A batch version of this was introduced in [28]. Given that, the recursive application of Bayes' theorem results in the approximate posterior
$$p(z \mid x_1, \ldots, x_n) \approx p(x_n \mid z)\, q(z \mid x_1, \ldots, x_{n-1}) \quad (25)$$
after $n$ data points. In essence, this formulates the (approximate) posterior updates in terms of sufficient statistics, which are updated with each new incoming data point. The batch updates (listed in [28]) can be rewritten such that they depend on the following sufficient statistics: $\sum_{n=1}^N \phi^n_m \phi^{n\top}_m$, $\sum_{n=1}^N \phi^n_m \mu^n_{f_m}$ and $\sum_{n=1}^N (\mu^n_{f_m})^2$. Although the length scales $\lambda_m$ could be treated as random variables too, here we update them using the noisy (stochastic) gradients produced by each incoming data point. Due to space limitations, we only summarize these update equations in the algorithm below, where we have replaced the expectation operator by $\langle \cdot \rangle$.
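The sufficient-statistics view can be sketched as follows: each local model keeps forgetting-weighted statistics and refits its weight posterior from them for every incoming data point. The Gamma, ARD and length-scale updates (lines 9-13 of Algorithm 1 below) are omitted here, and all precisions, length scales and the 1-D test function are illustrative assumptions:

```python
import numpy as np

class LocalModel:
    """Streaming sufficient statistics for one local model (lines 7-8 of
    Algorithm 1). Precisions and length scales are held fixed in this
    sketch; the Gamma/ARD and length-scale updates are omitted."""
    def __init__(self, c, lam=0.4, beta_f=400.0, kappa=0.995):
        self.c, self.lam, self.beta_f, self.kappa = c, lam, beta_f, kappa
        self.A = np.eye(2)                   # fixed prior precision
        self.S_pp = np.zeros((2, 2))         # running sum of phi phi^T
        self.S_pf = np.zeros(2)              # running sum of phi * mu_f
        self.mu_w = np.zeros(2)

    def eta(self, x):
        return np.exp(-(x - self.c)**2 / (2 * self.lam**2))

    def phi(self, x):                        # phi_m(x) = eta_m(x) [1, x - c_m]
        return self.eta(x) * np.array([1.0, x - self.c])

    def update(self, x, f_target):           # lines 7-8: decay, accumulate, refit
        p = self.phi(x)
        self.S_pp = self.kappa * self.S_pp + np.outer(p, p)
        self.S_pf = self.kappa * self.S_pf + p * f_target
        S_w = np.linalg.inv(self.beta_f * self.S_pp + self.A)
        self.mu_w = self.beta_f * S_w @ self.S_pf

rng = np.random.default_rng(5)
beta_y, beta_f, w_gen = 400.0, 400.0, 0.3
models = []
for _ in range(4000):                        # one pass over streaming data
    x = rng.uniform(0.0, 2 * np.pi)
    y = np.sin(x) + 0.05 * rng.standard_normal()
    if not models or max(m.eta(x) for m in models) < w_gen:   # line 3
        models.append(LocalModel(c=x))       # new model centered at x
    preds = np.array([m.mu_w @ m.phi(x) for m in models])
    coup = (1 / beta_f) / (1 / beta_y + len(models) / beta_f)  # Eq. (17)
    for m, pm in zip(models, preds):
        if m.eta(x) < 0.01:                  # line 6: skip inactive models
            continue
        m.update(x, pm + coup * (y - preds.sum()))

xq = np.linspace(0.5, 2 * np.pi - 0.5, 40)
y_hat = np.array([sum(m.mu_w @ m.phi(v) for m in models) for v in xq])
```

The forgetting factor $\kappa$ lets early, poorly targeted updates decay, and new local models are allocated only where no existing localizer exceeds $w_{gen}$, so the model count saturates once the input space is covered.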
Finally, we use an extension analogous to incremental training of the relevance vector machine [29] to iteratively add local models at new, greedily selected locations c_{M+1}. Starting with one local model, each iteration adds one local model in the variational step and prunes out existing local models for which all components \alpha_{mk} \to \infty. This works well in practice, with the caveat that the number of models M can grow quickly at first, before the pruning becomes effective. Thus, for each candidate location c_{M+1} we check whether any of the existing local models c_{1:M} produces a localizing weight \eta_m(c_{M+1}) \geq w_{gen}, where w_{gen} is a parameter between 0 and 1 that regulates how readily new local models are added. Algorithm 1 gives an overview of the entire incremental algorithm.

Algorithm 1 Incremental LGR
1: M = 0; C = {}; a^\alpha_0, b^\alpha_0, a^\beta_0, b^\beta_0; forgetting rate \kappa; learning rate \nu
2: for all (x_n, y_n) do  // for each data point
3:   if \eta_m(x_n) < w_{gen} for all m = 1, ..., M then c_m \leftarrow x_n; C \leftarrow C \cup \{c_m\}; M \leftarrow M + 1 end if
4:   \Sigma_f = B^{-1} - \frac{B^{-1}\mathbf{1}\mathbf{1}^\top B^{-1}}{\beta_y^{-1} + \sum_m \langle\beta_{fm}\rangle^{-1}},
     \mu_{f^n_m} = \mu_{w_m}^\top \phi^n_m + \frac{\langle\beta_{fm}\rangle^{-1}}{\beta_y^{-1} + \sum_{m=1}^M \langle\beta_{fm}\rangle^{-1}} \big(y_n - \sum_{m=1}^M \mu_{w_m}^\top \phi^n_m\big)
5:   for m = 1 to M do
6:     if \eta_m(x_n) < 0.01 then continue end if
7:     S_{\phi\phi,m} \leftarrow \kappa S_{\phi\phi,m} + \phi^n_m \phi^{n\top}_m,  S_{\phi\mu,m} \leftarrow \kappa S_{\phi\mu,m} + \phi^n_m \mu_{f^n_m},  S_{\mu^2,m} \leftarrow \kappa S_{\mu^2,m} + \mu^2_{f^n_m}
8:     \Sigma_{w_m} = (\langle\beta_{fm}\rangle S_{\phi\phi,m} + \langle A_m\rangle)^{-1},  \mu_{w_m} = \langle\beta_{fm}\rangle \Sigma_{w_m} S_{\phi\mu,m}
9:     N_m \leftarrow \kappa N_m + 1,  a^\beta_{N_m} = a^\beta_0 + N_m,  a^\alpha_{N_m} = a^\alpha_0 + 0.5
10:    b^\beta_{N_m} = S_{\mu^2,m} - 2\mu_{w_m}^\top S_{\phi\mu,m} + tr[S_{\phi\phi,m}(\Sigma_{w_m} + \mu_{w_m}\mu_{w_m}^\top)] + N_m \sigma^2_{fm}
11:    b^\alpha_{N_m,k} = \mu^2_{w_m,k} + \Sigma_{w_m,kk}
12:    \langle\beta_{fm}\rangle = a^\beta_{N_m} / b^\beta_{N_m},  \langle A_m\rangle = diag(a^\alpha_{N_m,k} / b^\alpha_{N_m,k})
13:    \lambda_m \leftarrow \lambda_m + \nu \, \partial/\partial\lambda_m \log \mathcal{N}(\langle f^n_m\rangle; \langle w_m\rangle^\top \phi^n_m, \langle\beta_{fm}\rangle^{-1})
14:    if \langle\alpha_{mk}\rangle > 10^3 for all k = 1, ..., K then prune local model m, M \leftarrow M - 1 end if
15:  end for
16: end for

Table 1: Datasets for inverse dynamics tasks. KUKA1 and KUKA2 are different splits of the same data. The rightmost column indicates the overlap in input-space coverage between the offline (IS_offline) and online (IS_online) training sets.
Dataset      freq (Hz)  Motion                      N_offline train  N_online train  N_test  IS_offline ∩ IS_online
Sarcos [2]   100        rhythmic                    4449             44484           --      large overlap
KUKA1        500        rhythmic at various speeds  17560            180360          --      small overlap
KUKA2        500        rhythmic at various speeds  17560            180360          --      no overlap
KUKAsim      500        rhythmic + discrete         --               1984950         20050   --

5 Experiments

We evaluate LGR on inverse dynamics learning tasks, using data from two robotic platforms: a SARCOS anthropomorphic arm and a KUKA lightweight arm. For both robots, learning the inverse dynamics means learning a map from the joint positions q (rad), velocities \dot{q} (rad/s), and accelerations \ddot{q} (rad/s^2) to the torques \tau (Nm) of each of the 7 joints (degrees of freedom). We compare to two methods previously used for inverse dynamics learning: LWPR^1, an extension of LWR for high-dimensional spaces [31], and I-SSGPR^2 [13], an incremental version of sparse spectrum GPR. I-SSGPR differs from LGR and LWPR in that it is a global method and does not learn the distance metric online. Instead, I-SSGPR requires offline training of its hyperparameters before it can be used online. We mimic the procedure used in [13]: an offline training set is used to learn an initial model and hyperparameters, after which an online training set is used to evaluate incremental learning. Where indicated, we use initial offline training for all three methods. I-SSGPR uses the typical GPR optimization procedure for offline training and is thus only available in batch mode. For LGR, we use the batch version for pre-training and hyperparameter learning. For all experiments we initialized the length scales to \lambda = 0.3 and used w_{gen} = 0.3 for both LWPR and LGR. We evaluate on four data sets, listed in Table 1. These sets vary in scale, in the types of motion, and in how well the offline training set represents the data encountered during online learning. All results were averaged over 5 randomly seeded runs; mean-squared error (MSE) and normalized mean-squared error (nMSE) are reported on the online training data set.
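The role of the threshold w_gen = 0.3 used above (the model-creation check on line 3 of Algorithm 1) can be illustrated with a small sketch. The squared-exponential form of the localizer η_m below is an assumption for illustration, since its exact definition appears earlier in the paper; the variable names are ours.

```python
import numpy as np

# A new local model is created at x only if NO existing center produces
# a localizing weight eta_m(x) >= w_gen. Here eta is assumed to be a
# squared-exponential localizer with per-dimension length scales lam.

def eta(x, c, lam):
    d = (x - c) / lam
    return float(np.exp(-0.5 * d @ d))

def maybe_add_center(x, centers, lam, w_gen):
    if all(eta(x, c, lam) < w_gen for c in centers):
        centers.append(x.copy())
        return True
    return False

lam = np.array([0.3, 0.3])          # length scales, as in the experiments
centers = [np.zeros(2)]             # one existing local model at the origin
added_near = maybe_add_center(np.array([0.01, 0.0]), centers, lam, w_gen=0.3)
added_far = maybe_add_center(np.array([2.0, 2.0]), centers, lam, w_gen=0.3)
```

A point close to an existing center is covered (no new model), while a point in unexplored input space triggers allocation of a new local model, which is how the model counts in Tables 2 and 3 arise.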
The nMSE is the mean-squared error normalized by the variance of the outputs.

Table 2: Predictive performance on the online training data of Sarcos after one sweep. I-SSGPR was trained with 200 (400) features; the MSE for 400 features is reported in brackets.

       I-SSGPR200(400)           LWPR                         LGR
Joint  MSE              nMSE     MSE     nMSE   # of LM       MSE     nMSE   # of LM
J1     13.699 (10.832)  0.033    19.180  0.046  461.4         11.434  0.027  321.4
J2      6.158  (4.788)  0.027     9.783  0.044  495.0          8.342  0.037  287.4
J3      1.803  (1.415)  0.018     3.595  0.036  464.6          2.237  0.023  298.0
J4      1.198  (0.857)  0.006     4.807  0.025  382.8          5.079  0.027  303.2
J5      0.034  (0.027)  0.036     0.071  0.075  431.2          0.031  0.033  344.2
J6      0.129  (0.096)  0.044     0.248  0.085  510.2          0.101  0.034  344.2
J7      0.093  (0.063)  0.014     0.231  0.034  378.8          0.170  0.025  348.8

Sarcos: Table 2 summarizes results on the popular Sarcos benchmark for inverse dynamics learning [2]. The traditional test set is used as the offline training data to pre-train all three models. I-SSGPR is trained with 200 and 400 sparse spectrum features, denoted I-SSGPR200(400); 200 features is the optimal design choice according to [13]. We report the (normalized) mean-squared error on the online training data after one sweep through it, i.e., after each data point has been used exactly once. All three methods perform well on this data, with I-SSGPR and LGR having a slight edge over LWPR in terms of accuracy; moreover, LGR uses fewer local models than LWPR. The Sarcos offline training set represents the data encountered during online training very well, so online distance metric learning is not necessary here to achieve good performance.

^1 We use the LWPR implementation found in the SL simulation software package [30].
^2 We use code from the learningMachine library in the RobotCub framework, from http://eris.liralab.it/iCub

Table 3: Predictive performance on the online training data of KUKA1 and KUKA2 after one sweep. KUKA2 results are averages across joints.
I-SSGPR was trained with 200 and 400 features (results for I-SSGPR400 shown in brackets).

                I-SSGPR200(400)           LWPR                        LGR
Data    Joint   MSE              nMSE     MSE    nMSE   # of LM       MSE    nMSE   # of LM
KUKA1   J1       7.021  (7.680)  0.233    2.362  0.078  3476.8        2.238  0.074  3188.6
        J2      16.385 (18.492)  0.265    2.359  0.038  3508.6        2.738  0.044  3363.8
        J3       1.872  (1.824)  0.289    0.457  0.071  3477.2        0.528  0.082  3246.6
        J4       3.124  (3.460)  0.256    0.503  0.041  3494.6        0.571  0.047  3333.6
        J5       0.095  (0.143)  0.196    0.019  0.039  3512.4        0.017  0.036  3184.4
        J6       0.142  (0.296)  0.139    0.043  0.042  3561.0        0.029  0.029  3372.4
        J7       0.129  (0.198)  0.174    0.023  0.031  3625.6        0.033  0.044  3232.6
KUKA2   all      9.740  (9.985)  0.507    1.064  0.056  3617.7        1.012  0.054  3290.2

[Figure 2: Left: nMSE on the first joint of the simulated KUKA arm as a function of the number of data points n. Right: average number of local models M as a function of n.]

KUKA1 and KUKA2: The two KUKA data sets consist of rhythmic motions at various speeds and represent a more realistic setting in robotics: while one can collect some data for offline training, it is not feasible to cover the whole state space. The offline data of KUKA1 was chosen to give partial coverage of the range of available speeds, while the offline data of KUKA2 consists of motion at only one speed. In this setting, both LWPR and LGR excel (Table 3). Since they can learn local distance metrics on the fly, they adapt to incoming data in previously unexplored input regions. The performance of I-SSGPR200 degrades as the offline training data becomes less representative, while LGR and LWPR perform almost equally well on KUKA1 and KUKA2. While there is little difference in accuracy between LGR and LWPR, LGR consistently uses fewer local models and does not require careful manual meta-parameter tuning. Since both LGR and LWPR use more local models on this data (compared to the Sarcos data), we also tried increasing the feature space of I-SSGPR to 400 features. This did not improve I-SSGPR's performance on the online data (see Table 3).
Finally, it is noteworthy that LGR processes both of these data sets at ~500 Hz (C++ code, on a 3.4 GHz Intel Core i7), making it a realistic alternative for real-time inverse dynamics learning tasks.

KUKAsim: Finally, we evaluate LGR's ability to learn from scratch on KUKAsim, a large data set of 2 million simulated data points collected using [30]. We randomly drew 1% of the points as a test set, on which we evaluate convergence during online training. Figure 2 shows convergence (left) and the number of local models used (right), averaged over 5 randomly seeded runs for joint 1. After the first 1e5 data points, both LWPR and LGR achieve a normalized mean-squared error below 0.07, and eventually converge to an nMSE of ~0.01. LGR converges slightly faster, while using fewer local models (Figure 2, right).

6 Conclusion

We proposed a top-down approach to probabilistic localized regression. Local Gaussian Regression decouples inference over M local models, resulting in efficient and principled updates for all parameters, including the local distance metrics. These localized updates can be used in batch mode as well as incrementally, yielding computationally efficient learning in either case and applicability to big data sets. Evaluated on a variety of simulated and real robotic inverse dynamics tasks, and compared to I-SSGPR and LWPR, incremental LGR shows the ability to add resources (local models) and to update its distance metrics online, which is essential to consistently achieve high accuracy. Compared to LWPR, LGR matches or improves precision, while consistently using fewer resources (local models) and requiring significantly fewer manually tuned parameters.

References
[1] Christopher G. Atkeson, Andrew W. Moore, and Stefan Schaal. Locally weighted learning for control. Artificial Intelligence Review, (1-5):75–113, 1997.
[2] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[3] Jianqing Fan and Irene Gijbels.
Data-driven bandwidth selection in local polynomial fitting: variable bandwidth and spatial adaptation. Journal of the Royal Statistical Society, pages 371–394, 1995.
[4] Stefan Schaal and Christopher G. Atkeson. Constructive incremental learning from only local information. Neural Computation, 10(8):2047–2084, 1998.
[5] Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. JMLR, 6:1939–1959, 2005.
[6] Krzysztof Chalupka, Christopher K. I. Williams, and Iain Murray. A framework for evaluating approximation methods for Gaussian process regression. JMLR, 14(1):333–350, 2013.
[7] Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In International Conference on Artificial Intelligence and Statistics, pages 567–574, 2009.
[8] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. Advances in Neural Information Processing Systems, 18:1257, 2006.
[9] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[10] Miguel Lázaro-Gredilla, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, and Aníbal R. Figueiras-Vidal. Sparse spectrum Gaussian process regression. JMLR, 11:1865–1881, 2010.
[11] Marco F. Huber. Recursive Gaussian process: On-line regression and learning. Pattern Recognition Letters, 45:85–91, 2014.
[12] Lehel Csató and Manfred Opper. Sparse on-line Gaussian processes. Neural Computation, 2002.
[13] Arjan Gijsberts and Giorgio Metta. Real-time model learning using incremental sparse spectrum Gaussian process regression. Neural Networks, 41:59–69, 2013.
[14] James Hensman, Nicolo Fusi, and Neil D. Lawrence. Gaussian processes for big data. UAI, 2013.
[15] Jo-Anne Ting, Mrinal Kalakrishnan, Sethu Vijayakumar, and Stefan Schaal. Bayesian kernel shaping for learning control. Advances in Neural Information Processing Systems, 6:7, 2008.
[16] Duy Nguyen-Tuong, Jan R. Peters, and Matthias Seeger.
Local Gaussian process regression for real time online model learning. In Advances in Neural Information Processing Systems, pages 1193–1200, 2008.
[17] Edward Snelson and Zoubin Ghahramani. Local and global sparse Gaussian process approximations. In International Conference on Artificial Intelligence and Statistics, pages 524–531, 2007.
[18] Trevor Hastie and Clive Loader. Local regression: Automatic kernel carpentry. Statistical Science, 1993.
[19] J. Moody and C. Darken. Learning with localized receptive fields. In Proceedings of the 1988 Connectionist Summer School, pages 133–143, San Mateo, CA, 1988.
[20] Radford M. Neal. Bayesian Learning for Neural Networks, volume 118. Springer, 1996.
[21] Michael E. Tipping. Sparse Bayesian learning and the relevance vector machine. The Journal of Machine Learning Research, 1:211–244, 2001.
[22] Aaron D'Souza, Sethu Vijayakumar, and Stefan Schaal. The Bayesian backfitting relevance vector machine. In ICML, 2004.
[23] Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 2008.
[24] Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference. JMLR, 14(1):1303–1347, 2013.
[25] Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C. Wilson, and Michael I. Jordan. Streaming variational Bayes. In Advances in Neural Information Processing Systems, pages 1727–1735, 2013.
[26] Jan Luts, Tamara Broderick, and Matt Wand. Real-time semiparametric regression. arXiv, 2013.
[27] Antti Honkela and Harri Valpola. On-line variational Bayesian learning. In 4th International Symposium on Independent Component Analysis and Blind Signal Separation, pages 803–808, 2003.
[28] Franziska Meier, Philipp Hennig, and Stefan Schaal. Efficient Bayesian local model learning for control. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014.
[29] Joaquin Quiñonero-Candela and Ole Winther. Incremental Gaussian processes. In NIPS, 2002.
[30] Stefan Schaal. The SL simulation and real-time control software package. Technical report, 2009.
[31] Sethu Vijayakumar and Stefan Schaal. Locally weighted projection regression: Incremental real time learning in high dimensional space. In ICML, pages 1079–1086, 2000.
Controlling privacy in recommender systems

Yu Xin, CSAIL, MIT, yuxin@mit.edu
Tommi Jaakkola, CSAIL, MIT, tommi@csail.mit.edu

Abstract

Recommender systems involve an inherent trade-off between the accuracy of recommendations and the extent to which users are willing to release information about their preferences. In this paper, we explore a two-tiered notion of privacy where there is a small set of "public" users who are willing to share their preferences openly, and a large set of "private" users who require privacy guarantees. We show theoretically and demonstrate empirically that a moderate number of public users with no access to private user information already suffices for reasonable accuracy. Moreover, we introduce a new privacy concept for gleaning relational information from private users while maintaining first order deniability. We demonstrate gains from controlled access to private user preferences.

1 Introduction

Recommender systems exploit fragmented information available from each user. In a realistic system there is also considerable "churn", i.e., users and items entering or leaving the system. The core problem of transferring the collective experience of many users to an individual user can be understood in terms of matrix completion ([13, 14]). Given a sparsely populated matrix of preferences, where the rows and columns of the matrix correspond to users and items, respectively, the goal is to predict values for the missing entries. Matrix completion problems can be solved as convex regularization problems, using the trace norm as a convex surrogate for rank. A number of algorithms are available for solving large-scale trace-norm regularization problems. Such algorithms typically operate by iteratively building the matrix from rank-1 components (e.g., [7, 17]). Under reasonable assumptions (e.g., boundedness, noise, restricted strong convexity), the resulting empirical estimators have been shown to converge to the underlying matrix with high probability ([12, 8, 2]).
Consistency guarantees have mostly involved matrices of fixed dimension, i.e., generalization to new users is not considered. In this paper, we reformulate the regularization problem in a manner that depends only on the item (as opposed to user) features, and characterize the error for out-of-sample users. The completion accuracy depends directly on the amount of information that each user is willing to share with the system ([1]). It may be possible in some cases to side-step this statistical trade-off by building peer-to-peer networks with homomorphic encryption, but this is computationally challenging ([3, 11]). We aim to address the statistical question directly. The statistical trade-off between accuracy and privacy further depends on the notion of privacy we adopt. A commonly used privacy concept is Differential Privacy (DP) ([6]), first introduced to protect information leaked from database queries. In a recommender context, users may agree to let a trusted party hold and aggregate their data, and perform computations on their behalf. Privacy guarantees are then sought for any results published beyond the trusted party (including back to the users). In this setting, differential privacy can be achieved through obfuscation (adding noise) without a significant loss of accuracy ([10]). In contrast to [10], we view the system as an untrusted entity and assume that users wish to guard their own data. We depart from differential privacy and separate the computations that can be done locally (privately) by individual users from the computations that must be performed by the system (e.g., aggregation). For example, in terms of low rank matrices, only the item features have to be solved for by the system. The corresponding user features can be obtained locally by the users and subsequently used for ranking.
From this perspective, we divide the set of users into two pools: the set of public users who openly share their preferences, and the larger set of private users who require explicit privacy guarantees. We show theoretically and demonstrate empirically that a moderate number of public users suffices for accurate estimation of the item features. The remaining private users can make use of these item features without any release of information. Moreover, we propose a new 2nd order privacy concept which uses limited (2nd order) information from the private users as well, and illustrate how recommendations can be further improved while maintaining marginal deniability of private information.

2 Problem formulation and summary of results

Recommender setup without privacy. Consider a recommendation problem with n users and m items. The underlying complete rating matrix to be recovered is ˚X ∈ R^{n×m}. If only a few latent factors affect user preferences, ˚X can be assumed to have low rank. As such, it is also recoverable from a small number of observed entries. We assume that the entries are observed with noise. Specifically,

Y_{i,j} = ˚X_{i,j} + \epsilon_{i,j},  (i, j) ∈ Ω   (1)

where Ω denotes the set of observed entries. The noise is assumed to be i.i.d. and to follow a zero-mean sub-Gaussian distribution with parameter \|\epsilon\|_{\psi_2} = \sigma. Following [16], we refer to this noise distribution as Sub(\sigma^2). To bias our estimated rating matrix X towards low rank, we use the convex relaxation of rank in the form of the trace norm, i.e., the sum of the singular values of the matrix: \|X\|_* = \sum_i \sigma_i(X). The basic estimation problem, without any privacy considerations, is then given by

\min_{X \in R^{n \times m}} \frac{1}{N} \sum_{(i,j) \in \Omega} (Y_{i,j} - X_{i,j})^2 + \frac{\lambda}{\sqrt{mn}} \|X\|_*   (2)

where \lambda is a regularization parameter and N = |\Omega| is the total number of observed ratings. The factor \sqrt{mn} ensures that the regularization does not grow with either dimension. The above formulation requires the server to explicitly obtain predictions for each user, i.e., to solve for X.
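The objective in Eq. (2) is straightforward to evaluate numerically. The following is a hedged sketch of the computation (shapes, values, and the helper name are illustrative only):

```python
import numpy as np

# Evaluate Eq. (2): mean squared error over the observed entries Omega,
# plus the trace-norm penalty (sum of singular values) scaled by
# lambda / sqrt(mn).

def objective(X, Y, omega, lam):
    n, m = X.shape
    N = len(omega)
    loss = sum((Y[i, j] - X[i, j]) ** 2 for i, j in omega) / N
    trace_norm = np.linalg.svd(X, compute_uv=False).sum()  # ||X||_*
    return loss + lam / np.sqrt(m * n) * trace_norm

# tiny example: the estimate matches Y on the observed diagonal,
# so only the trace-norm penalty contributes
X = np.eye(2)
Y = np.eye(2)
omega = [(0, 0), (1, 1)]
val = objective(X, Y, omega, lam=1.0)
```

Because the penalty is the sum of singular values, minimizers of Eq. (2) trade observed-entry fit against low effective rank, which is the property exploited throughout the paper.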
We can instead write X = U V^\top and \Sigma = \frac{1}{\sqrt{mn}} V V^\top, and solve for \Sigma only. If the server then communicates the resulting low-rank \Sigma (or just V) to each user, the users can reconstruct the relevant part of U locally and reproduce the part of X that pertains to them. To this end, let \phi_i = \{j : (i, j) \in \Omega\} be the set of observed entries for user i, and let Y_{i,\phi_i} be the column vector of user i's ratings. Then we can show that Eq. (2) is equivalent to solving

\min_{\Sigma \in S^+} \sum_{i=1}^n Y_{i,\phi_i}^\top (\lambda' I + \Sigma_{\phi_i,\phi_i})^{-1} Y_{i,\phi_i} + \sqrt{nm} \|\Sigma\|_*   (3)

where S^+ is the set of positive semi-definite m × m matrices and \lambda' = \lambda N / \sqrt{nm}. Having solved for \hat\Sigma, we can predict the ratings of unobserved items (index set \phi^c_i for user i) by

\hat{X}_{i,\phi^c_i} = \hat\Sigma_{\phi^c_i,\phi_i} (\lambda' I + \hat\Sigma_{\phi_i,\phi_i})^{-1} Y_{i,\phi_i}   (4)

Note that we have yet to address any privacy concerns: the solution to Eq. (3) still requires access to the full ratings Y_{i,\phi_i} of each user.

Recommender setup with privacy. Our privacy setup assumes an untrusted server. Any user interested in guarding their data must therefore keep and process their data locally, releasing information to the server only in a controlled manner. We will initially divide users into two broad
The resulting ˆΣ (or ˆV ) can be subsequently shared with the private users, enabling the private users (their devices) to locally rank candidate items without any release of private information. The estimation of ˆΣ is then improved by asking private users to share 2nd order relational information about their ratings without any release of marginal selections/ratings. Note that we do not consider privacy beyond ratings. In other words, we omit any subsequent release of information due to users exploring items recommended to them. Summary of contributions We outline here our major contributions towards characterizing the role of public users and the additional controlled release of information from private users. 1) We show that ˚Σ = p ˚ XT ˚ X/√nm can be estimated in a consistent, accurate manner on the basis of public users alone. In particular, we express the error ∥ˆΣ −˚Σ∥F as a function of the total number of observations. Moreover, if the underlying public user ratings can be thought of as i.i.d. samples, we also bound ∥˚Σ −Σ∗∥F in terms of the number of public users. Here Σ∗is the true limiting estimate. See section 3.1 for details. 2) We show how the accuracy of predicted ratings ˆXi,φc i for private users relates to the accuracy of estimating ˆΣ (primarily from public users). Since the ratings for user i may not be related to the subspace that ˆΣ lies in, we can only characterize the accuracy when sufficient overlap exists. We quantify this overlap, and show how ∥ˆXi,φc i −˚ Xi,φc i ∥depends on this overlap, accuracy of ˆΣ, and the observation noise. See section 3.2 for details. 3) Having established the accuracy of predictions based on public users alone, we go one step further and introduce a new privacy mechanism and algorithms for gleaning additional relational (2nd order) information from private users. This 2nd order information is readily used by the server to estimate ˆΣ. 
The privacy concept constructively maintains first order (marginal) deniability for private users. We demonstrate empirically the gains from the additional 2nd order information. See Section 4.

3 Analysis

3.1 Statistical consistency of \hat\Sigma

Let \hat{X} be a solution to Eq. (2). We can write \hat{X} = \hat{U} \hat{V}^\top, where \hat{U}^\top \hat{U} = \hat{I}_m with 0/1 diagonal. Since \hat\Sigma = \frac{1}{\sqrt{mn}} (\hat{X}^\top \hat{X})^{1/2}, we can first analyze the error in \hat{X} and then relate it to \hat\Sigma. To this end, we follow the restricted strong convexity (RSC) analysis of [12]. However, their result depends on the inverse of the minimum number of ratings over all users and items. In practice (see below), the number of ratings decays exponentially across sorted users, which makes such a result loose. We provide a modified analysis that depends only on the total number of observations N. Throughout the analysis, we assume that each row vector ˚X_{i,\cdot} belongs to a fixed r-dimensional subspace. We also assume that both noiseless and noisy entries are bounded, i.e., |Y_{i,j}|, |˚X_{i,j}| \leq \alpha for all (i, j). For brevity, we use \|Y - X\|^2_\Omega to denote the empirical loss \sum_{(i,j) \in \Omega} (Y_{i,j} - X_{i,j})^2. The restricted strong convexity (RSC) property assumes that there exists a constant \kappa > 0 such that

\frac{\kappa}{mn} \|\hat{X} - ˚X\|^2_F \leq \frac{1}{N} \|\hat{X} - ˚X\|^2_\Omega   (5)

for \hat{X} - ˚X in a certain subset. RSC provides the step from approximating the observations to approximating the full underlying matrix. It is satisfied with high probability provided that N is of the order of (m + n) \log(m + n). Let the SVD of ˚X be ˚X = ˚P S ˚Q^\top, and let row(X) and col(X) denote the row and column spaces of X. We define the following two sets:

A(P, Q) := \{X : row(X) \subseteq ˚P, col(X) \subseteq ˚Q\}
B(P, Q) := \{X : row(X) \subseteq ˚P^\perp, col(X) \subseteq ˚Q^\perp\}   (6)

Let \pi_A(X) and \pi_B(X) be the projections of X onto the sets A and B, respectively, and let \bar\pi_A = I - \pi_A and \bar\pi_B = I - \pi_B. Let \Delta = \hat{X} - ˚X be the difference between the estimated and the underlying rating matrices.
Our first lemma demonstrates that \Delta lies primarily in a restricted subspace, and the second one guarantees that the noise remains bounded.

Lemma 3.1. Assume the \epsilon_{i,j} for (i, j) \in \Omega are i.i.d. sub-Gaussian with \sigma = \|\epsilon_{i,j}\|_{\psi_2}. Then, with probability 1 - e^{-N/(4ch)},

\|\pi_B(\Delta)\|_* \leq \|\bar\pi_B(\Delta)\|_* + \frac{2c^2\sigma^2\sqrt{mn}}{N\lambda} \log^2 N.

Here h > 0 is an absolute constant associated with the sub-Gaussian noise. If \lambda = \lambda_0 c\sigma \log N / \sqrt{N}, then

\frac{c^2\sigma^2\sqrt{mn} \log^2 N}{N\lambda} = \frac{c\sigma \log N}{\lambda_0} \sqrt{\frac{mn}{N}} = b \log N \sqrt{\frac{n}{N}}

where we leave the dependence on n explicit. Let D(b, n, N) denote the set of difference matrices that satisfy Lemma 3.1 above. By combining the lemma and the RSC property, we obtain the following theorem.

Theorem 3.2. Assume RSC holds for the set D(b, n, N) with parameter \kappa > 0, where b = c\sigma\sqrt{m}/\lambda_0. Let \lambda = \lambda_0 c\sigma \log N / \sqrt{N}. Then

\frac{1}{\sqrt{mn}} \|\Delta\|_F \leq 2c\sigma \Big( \frac{1}{\sqrt{\kappa}} + \frac{\sqrt{2r}}{\kappa} \Big) \frac{\log N}{\sqrt{N}}

with probability at least 1 - e^{-N/(4ch)}, where h, c > 0 are constants. The bound in the theorem consists of two terms, pertaining to the noise and to the regularization. In contrast to [12], the terms relate only to the total number of observations N.

We now turn our focus to the accuracy of \hat\Sigma. First, we map the accuracy of \hat{X} to that of \hat\Sigma using a perturbation bound for the polar decomposition (see [9]).

Lemma 3.3. If \frac{1}{\sqrt{mn}} \|\hat{X} - ˚X\|_F \leq \delta, then \|\hat\Sigma - ˚\Sigma\|_F \leq \sqrt{2}\,\delta.

This completes our analysis in terms of recovering ˚\Sigma for a fixed-size underlying matrix ˚X. As a final step, we turn to the question of how the estimation error changes as the number of users n grows. Let ˚X_i be the underlying rating vector of user i and define \Theta_n = \frac{1}{mn} \sum_{i=1}^n ˚X_i^\top ˚X_i, so that ˚\Sigma = \Theta_n^{1/2}. If \Theta^* is the limit of \Theta_n, then \Sigma^* = (\Theta^*)^{1/2}. We bound the distance between ˚\Sigma and \Sigma^*.

Theorem 3.4. Assume the ˚X_i are i.i.d. samples from a distribution with support only in a subspace of dimension r and bounded norm \|˚X_i\| \leq \alpha\sqrt{m}. Let \beta_1 and \beta_r be the smallest and largest eigenvalues of \Sigma^*.
Then, for large enough n, with probability at least 1 - r/n^2,

\|˚\Sigma - \Sigma^*\|_F \leq 2\sqrt{r}\,\alpha \sqrt{\frac{\beta_r \log n}{\beta_1 n}} + o\Big(\frac{\log n}{n}\Big)   (7)

Combining the two theorems and using the triangle inequality, we obtain a high-probability bound on \|\hat\Sigma - \Sigma^*\|_F. If the number of ratings for each user is larger than \xi m, then N > \xi nm and the bound scales as \eta (\log n / \sqrt{n}), with \eta a constant that depends on \xi. For large enough \xi, the number of users n required to achieve a given error bound is small. Therefore a few public users, each with a large number of ratings, can be enough to obtain a good estimate of \Sigma^*.

3.2 Prediction accuracy

We are finally ready to characterize the error in the predicted ratings \hat{X}_{i,\phi^c_i} for all users, as defined in Eq. (4). For brevity, we use \delta to denote the bound on \|\hat\Sigma - \Sigma^*\| obtained on the basis of our results above. We also use x_\phi and x_{\phi^c} as shorthands for X_{i,\phi_i} and X_{i,\phi^c_i}, with the idea that x_\phi typically refers to a new private user.

The key issue for us here is that the partial rating vector x_\phi may be of limited use. For example, if the number of observed ratings is less than the rank r, then we would be unable to identify a rating vector in the r-dimensional subspace even without noise. We seek to control this in our analysis by assuming that the observations have enough signal to be useful. Let the SVD of \Sigma^* be Q^* S^* (Q^*)^\top, and let \beta_1 be its minimum eigenvalue. We constrain the index set of observations \phi to lie in the set

D(\gamma) = \Big\{ \phi \subseteq \{1, \ldots, m\} \;\text{s.t.}\; \|x\|^2_F \leq \gamma \frac{m}{|\phi|} \|x_\phi\|^2_F, \;\forall x \in row((Q^*)^\top) \Big\}

The parameter \gamma depends on how the low-dimensional subspace is aligned with the coordinate axes. We are only interested in characterizing prediction errors for observations that are in D(\gamma); this is quite different from the RSC property. Our main result is then

Theorem 3.5. Suppose \|\Sigma - \Sigma^*\|_F \leq \delta \ll \beta_1 and \phi \in D(\gamma). For any ˚x \in row((Q^*)^\top), our observation is x_\phi = ˚x_\phi + \epsilon_\phi, where \epsilon_\phi \sim Sub(\sigma^2) is the noise vector.
The predicted ratings over the remaining entries are given by \hat{x}_{\phi^c} = \Sigma_{\phi^c,\phi} (\lambda' I + \Sigma_{\phi,\phi})^{-1} x_\phi. Then, with probability at least 1 - \exp(-c_2 \min(c_1^4, \sqrt{|\phi|}\, c_1^2)),

\|\hat{x}_{\phi^c} - ˚x_{\phi^c}\|_F \leq 2(\sqrt{\lambda'} + \delta) \Big( \sqrt{\gamma \frac{m}{|\phi|}} + 1 \Big) \Big( \frac{\|˚x\|_F}{\sqrt{\beta_1}} + \frac{2c_1\sigma|\phi|^{1/4}}{\sqrt{\lambda'}} \Big)

where c_1, c_2 > 0 are constants. All proofs are provided in the supplementary material. The term proportional to \|˚x\|_F / \sqrt{\beta_1} is due to the estimation error of \Sigma^*, while the term proportional to 2c_1\sigma|\phi|^{1/4} / \sqrt{\lambda'} comes from the noise in the observed ratings.

4 Controlled privacy for private users

Our theoretical results already demonstrate that a relatively small number of public users with many ratings suffices for a reasonable performance guarantee for both public and private users. Empirical results (next section) support this claim. However, since public users enjoy no privacy guarantees, we would like to limit the required number of such users by requesting private users to contribute in a limited manner while maintaining specific notions of privacy.

Definition 4.1 (Privacy preserving mechanism). Let M : R^{m \times 1} \to R^{m \times 1} be a random mechanism that takes a rating vector r as input and outputs M(r) of the same dimension, with j-th element M(r)_j. We say that M(r) is element-wise privacy preserving if Pr(M(r)_j = z) = p(z) for j = 1, \ldots, m and some fixed distribution p.

For example, a mechanism M(r) is element-wise private if its coordinates follow the same marginal distribution, such as the uniform distribution. Note that such a mechanism can still release information about how different ratings interact (co-vary), which is necessary for estimation.

Discrete values. Assume that each element of r and M(r) belongs to a discrete set S with |S| = K. A natural privacy constraint is to insist that the marginal distribution of M(r)_j be uniform, i.e., Pr(M(r)_j = z) = 1/K for z \in S. A trivial mechanism that satisfies the privacy constraint is to select each value uniformly at random from S.
In this case, the returned rating vector contributes nothing to the server model. Our goal is to design a mechanism that preserves useful 2nd order information. We assume that a small number of public user profiles are available, from which we can learn an initial model parameterized by (\mu, V), where \mu is the item mean vector and V is a low-rank component of \Sigma. The server provides each private user with the pair (\mu, V) and asks, once, for a response M(r). Here r is the user's full rating vector, completed (privately) with the help of the server model (\mu, V). The mechanism M(r) is assumed to be element-wise privacy preserving, thus releasing nothing about any single element in isolation. What information should it carry? If each user i provided their full rating vector r_i, the server could estimate \Sigma according to \frac{1}{\sqrt{nm}} \big( \sum_{i=1}^n (r_i - \mu)(r_i - \mu)^\top \big)^{1/2}. Thus, if M(r) preserves the second order statistics to the extent possible, the server can still obtain an accurate estimate of \Sigma. Consider a particular user and their completed rating vector r, and let P(x) = Pr(M(r) = x). We select the distribution P(x) by solving the following optimization problem, geared towards preserving interactions between the ratings under the uniform marginal constraint:

\min_P E_{x \sim P} \|(x - \mu)(x - \mu)^\top - (r - \mu)(r - \mu)^\top\|^2_F
\text{s.t.} \; P(x_i = s) = 1/K, \;\forall i, \forall s \in S   (8)

where K = |S|. The exact solution is difficult to obtain because the number of distinct assignments of x is K^m. Instead, we consider an approximate solution. Let x^1, \ldots, x^K \in R^{m \times 1} be K different vectors such that, for each coordinate i, \{x^1_i, x^2_i, \ldots, x^K_i\} forms a permutation of S. If we choose x with Pr(x = x^j) = 1/K, then the marginal distribution of each element is uniform, thereby maintaining element-wise privacy. It remains to optimize the set x^1, \ldots, x^K to capture the interactions between ratings. We use a greedy coordinate descent algorithm to optimize x^1, \ldots, x^K.
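Independently of how the greedy optimization (detailed next) proceeds, the permutation construction above already guarantees uniform marginals. A minimal sketch, with our own helper names; the greedy coordinate swaps that tune the candidates toward the 2nd order statistics are omitted for brevity:

```python
import random

# Build K candidate vectors x^1..x^K so that, at each coordinate i, the
# values (x^1_i, ..., x^K_i) form a permutation of the rating set S.
# The user then releases one candidate uniformly at random, so every
# coordinate of M(r) is marginally uniform over S by construction.

def build_candidates(S, m, rng):
    # one independent permutation of S per coordinate
    cols = [rng.sample(S, len(S)) for _ in range(m)]
    return [[cols[i][k] for i in range(m)] for k in range(len(S))]

def release(candidates, rng):
    # the element-wise private response M(r)
    return rng.choice(candidates)

rng = random.Random(0)
S = [1, 2, 3]                       # K = 3 discrete rating values
cands = build_candidates(S, m=4, rng=rng)
# at every coordinate, the K candidates jointly cover S exactly once
uniform = all(sorted(x[i] for x in cands) == sorted(S) for i in range(4))
out = release(cands, rng)
```

Any reshuffling that swaps values of a single coordinate across two candidates preserves this uniformity, which is exactly why the greedy swaps described next are free to optimize the objective in (8).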
For each coordinate $i$, we randomly select a pair $x^p$ and $x^q$, and switch $x^p_i$ and $x^q_i$ if the objective function in (8) is reduced. The process is repeated a few times before we move on to the next coordinate. The algorithm can be implemented efficiently because each operation deals only with a single coordinate. Finally, according to the mechanism, the private user selects one of $x^j$, $j = 1, \ldots, K$, uniformly at random and sends the discrete vector back to the server. Since the resulting rating vectors from private users are noisy, the server decreases their weight relative to the information from public users as part of the overall M-step for estimating $\Sigma$.

Continuous values. Assuming the rating values are continuous and unbounded, we require instead that the returned rating vectors follow marginal distributions with a given mean and variance. Specifically, $\mathbb{E}[M(r)_i] = 0$ and $\mathrm{Var}[M(r)_i] = v$, where $v$ is a constant that remains to be determined. Note that, again, any specific element of the returned vector will not, in isolation, carry any information specific to the element. As before, we search for the distribution $P$ so as to minimize the $L_2$ error of the second order statistics under marginal constraints. For simplicity, denote $r' = r - \mu$, where $r$ is the true completed rating vector, and $u_i = M(r)_i$. The objective is given by
$$\min_{P,v} \; \mathbb{E}_{u\sim P}\big\|uu^T - r'r'^T\big\|_F^2 \quad \text{s.t.} \quad \mathbb{E}[u_i] = 0, \;\; \mathrm{Var}[u_i] = v, \;\; \forall i. \tag{9}$$
Note that the formulation does not directly constrain that $P$ has identical marginals, only that the means and variances agree. However, the optimal solution does, as shown next.

Theorem 4.2. Let $z_i = \mathrm{sign}(r'_i)$ and $h = (\sum_{i=1}^m |r'_i|)/m$. The minimizing distribution of (9) is given by $\Pr(u = zh) = \Pr(u = -zh) = 1/2$.

We leave the proof to the supplementary material. A few remarks are in order. The mechanism in this case is a two component mixture distribution, placing equal probability mass on $\mathrm{sign}(r')h$ (vectorized) and $-\mathrm{sign}(r')h$.
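Theorem 4.2 gives the continuous mechanism in closed form: flip a fair coin between $\mathrm{sign}(r')h$ and $-\mathrm{sign}(r')h$. A minimal sketch (function name is ours):

```python
import numpy as np

def continuous_mechanism(r, mu, rng=None):
    """Mechanism of Theorem 4.2: release z*h or -z*h with equal probability,
    where z = sign(r - mu) and h = mean(|r - mu|).  Every coordinate then
    has mean 0 and variance h^2, so the marginals are identical."""
    rng = np.random.default_rng() if rng is None else rng
    r_prime = r - mu
    z = np.sign(r_prime)
    h = np.mean(np.abs(r_prime))
    s = 1.0 if rng.random() < 0.5 else -1.0  # the overall random sign
    return s * z * h
```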
As a result, the server, knowing the algorithm that private users follow, can reconstruct $\mathrm{sign}(r')$ up to an overall randomly chosen sign. Note also that the value of $h$ is computed from the user's private rating vector and therefore releases some additional information about $r' = r - \mu$, albeit weakly. To remove this information altogether, we could use the same $h$ for all users and estimate it based on public users. The privacy constraints will clearly have a negative impact on the prediction accuracy in comparison to having direct access to all the ratings. However, the goal is to improve accuracy beyond the public users alone by obtaining limited information from private users. While improvements are possible, the limited information surfaces in several ways. First, since private users provide no first order information, the estimation of mean rating values cannot be improved beyond public users. Second, the sampling method we use to enforce element-wise privacy adds noise to the aggregate second order information from which $V$ is constructed. Finally, the server can run the M-step with respect to the private users' information only once, whereas the original EM algorithm could entertain different completions for user ratings iteratively. Nevertheless, as illustrated in the next section, the algorithm can still achieve good accuracy, improving with each additional private user.

5 Experiments

We perform experiments on the Movielens 10M dataset, which contains 10 million ratings from 69878 users on 10677 movies. The test set contains 10 ratings for each user. We begin by demonstrating that indeed a few public users suffice for making accurate recommendations. Figure 1 (left) shows the test performance of both weighted (see [12]) and unweighted (uniform) trace norm regularization as we add users. Here users with most ratings are added first.
Figure 1: Left: Test RMSE as a function of the percentage of public users; Right: Performance changes with different rating numbers.

With only 1% of public users added, the test RMSE of unweighted trace norm regularization is 0.876, which is already close to the optimal prediction error. Note that the loss of weighted trace norm regularization actually starts to go up when the number of users increases. The reason is that the weighted trace norm penalizes less for users with few ratings. As a result, the resulting low dimensional subspace used for prediction is influenced more by users with few ratings. The statistical convergence bound in Section 3.1 involves terms that decrease with the number of ratings $N$ and terms that decrease with the number of public users $n$. The two factors are usually coupled. It is interesting to see how they impact performance individually. Given a number of total ratings, we compare two different methods of selecting public users. In the first method, users with most ratings are selected first, whereas the second method selects users uniformly at random. As a result, if we equalize the total number of ratings from each method, the second method selects a lot more users. Figure 1 (right) shows that the second method achieves better performance. An interpretation, based on the theory, is that the right side of the error bound (7) decreases as the number of users increases. We also show how performance improves based on controlled access to private user preferences. First, we take the top 100 users with the most ratings as the public users, and learn the initial prediction model from their ratings. To highlight possible performance gains, private users with more ratings are selected first.
The results remain close if we select private users uniformly. The rating values range from 0.5 to 5, with 10 different discrete values in total. Following the privacy mechanism for discrete values, each private user generates ten different candidate vectors and returns one of them uniformly at random. In the M-step, the weight for each private user is set to 1/2, compared to 1 for public users. During training, after processing $w = 20$ private users, we update the parameters $(\mu, V)$ and re-complete the rating vectors of public users, making predictions for the next batch of private users more accurate. The privacy mechanism for continuous values is also tested under the same setup. We denote the two privacy mechanisms as PMD and PMC, respectively. We compare five different scenarios. First, we use a standard DP method that adds Laplace noise to the completed rating vector. Let the DP parameter be $\epsilon$; because the maximum difference between rating values is 4.5, the noise follows $\mathrm{Lap}(0, 4.5/\epsilon)$. As before, we give a smaller weight to the noisy rating vectors, and this weight is determined by cross validation. Second, [5] proposed a notion of "local privacy" in which differential privacy is guaranteed for each user separately. An optimal strategy for $d$-dimensional multinomial distributions in this case reduces the effective sample size from $n$ to $n\epsilon^2/d$, where $\epsilon$ is the DP parameter. In our case the dimension corresponds to the number of items.

Figure 2: Test RMSE as a function of the number of private users.
PMC: the privacy mechanism for continuous values; PMD: the privacy mechanism for discrete values; Lap eps=1: DP with Laplace noise, $\epsilon = 1$; Lap eps=5: same as before except $\epsilon = 5$; SSLP eps=5: sampling strategy described in [4] with DP parameter $\epsilon = 5$; Exact 2nd order: with exact second order statistics from private users (not a valid privacy mechanism); Full EM: EM without any privacy protection.

This high dimensionality makes estimation challenging under DP constraints. We also compare to this method and denote it as SSLP (sampling strategy for local privacy). In addition, to understand how our approximation to the second order statistics affects performance, we also compare to the case where $r'a$ is given to the server directly, where $a \in \{-1, 1\}$ with equal probability. In this way, the server can obtain the exact second order statistics using $r'r'^T$. Note that this is not a valid privacy preserving mechanism. Finally, we compare to the case where the algorithm can access private user rating vectors multiple times and update the parameters iteratively. Again, this is not a valid mechanism but illustrates how much could be gained. Figure 2 shows the performance as a function of the number of private users. The standard Laplace noise method performs reasonably well when $\epsilon = 5$; however, the corresponding privacy guarantee is very weak. SSLP improves the accuracy mildly. In contrast, with the privacy mechanism we defined in Section 4, the test RMSE decreases significantly as more private users are added. If we use the exact second order information without the sampling method, the final test RMSE is reduced by 0.07 compared to PMD. Lastly, full EM without privacy protection reduces the test RMSE by another 0.08. These performance gaps are expected because there is an inherent trade-off between accuracy and privacy.

6 Conclusion

Our contributions in this paper are three-fold. First, we provide explicit guarantees for estimating item features in matrix completion problems.
Second, we show how the resulting estimates, if shared with new users, can be used to predict their ratings depending on the degree of overlap between their private ratings and the relevant item subspace. The empirical results demonstrate that only a small number of public users with a large number of ratings suffices for good performance. Third, we introduce a new privacy mechanism for releasing 2nd order information needed for estimating item features while maintaining 1st order deniability. The experiments show that this mechanism indeed performs well in comparison to other mechanisms. We believe that allowing different levels of privacy is an exciting research topic. An extension of our work would be applying the privacy mechanism to the learning of graphical models, in which 2nd or higher order information plays an important role.

7 Acknowledgements

The work was partially supported by a Google Research Award and funding from Qualcomm Inc.

References

[1] Mário S. Alvim, Miguel E. Andrés, Konstantinos Chatzikokolakis, Pierpaolo Degano, and Catuscia Palamidessi. Differential privacy: on the trade-off between utility and information leakage. In Formal Aspects of Security and Trust, pages 39–54. Springer, 2012.
[2] E. Candes and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 2010.
[3] J. Canny. Collaborative filtering with privacy via factor analysis. In SIGIR, 2002.
[4] John Duchi, Martin J. Wainwright, and Michael Jordan. Local privacy and minimax bounds: Sharp rates for probability estimation. In Advances in Neural Information Processing Systems, pages 1529–1537, 2013.
[5] John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. Privacy aware learning. In NIPS, pages 1439–1447, 2012.
[6] C. Dwork. Differential privacy: A survey of results. In Theory and Applications of Models of Computation, 2008.
[7] M. Jaggi and M. Sulovský. A simple algorithm for nuclear norm regularized problems. In ICML, 2010.
[8] R. Keshavan, A. Montanari, and S. Oh.
Matrix completion from noisy entries. JMLR, 2010.
[9] R. Mathias. Perturbation bounds for the polar decomposition. BIT Numerical Mathematics, 1997.
[10] F. McSherry and I. Mironov. Differentially private recommender systems: Building privacy into the Netflix Prize contenders. In SIGKDD, 2009.
[11] B. N. Miller, J. A. Konstan, and J. Riedl. PocketLens: Toward a personal recommender system. ACM Trans. Inf. Syst., 2004.
[12] S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: optimal bounds with noise. JMLR, 2012.
[13] R. Salakhutdinov and N. Srebro. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. In NIPS, 2010.
[14] N. Srebro, J. Rennie, and T. Jaakkola. Maximum margin matrix factorization. In NIPS, 2004.
[15] J. A. Tropp. User-friendly tail bounds for sums of random matrices. Found. Comput. Math., 2012.
[16] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv:1011.3027.
[17] Y. Xin and T. Jaakkola. Primal-dual methods for sparse constrained matrix completion. In AISTATS, 2012.
Localized Data Fusion for Kernel k-Means Clustering with Application to Cancer Biology

Mehmet Gönen (gonen@ohsu.edu) and Adam A. Margolin (margolin@ohsu.edu)
Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA

Abstract

In many modern applications from, for example, bioinformatics and computer vision, samples have multiple feature representations coming from different data sources. Multiview learning algorithms try to exploit all this available information to obtain a better learner in such scenarios. In this paper, we propose a novel multiple kernel learning algorithm that extends kernel k-means clustering to the multiview setting, combining kernels calculated on the views in a localized way to better capture sample-specific characteristics of the data. We demonstrate the better performance of our localized data fusion approach on a human colon and rectal cancer data set by clustering patients. Our method finds more relevant prognostic patient groups than global data fusion methods when we evaluate the results with respect to three commonly used clinical biomarkers.

1 Introduction

Clustering algorithms aim to find a meaningful grouping of the samples at hand in an unsupervised manner for exploratory data analysis. k-means clustering is one of the classical algorithms (Hartigan, 1975), which uses k prototype vectors (i.e., centers or centroids of k clusters) to characterize the data and minimizes a sum-of-squares cost function to find these prototypes with a coordinate descent optimization method. However, the final cluster structure heavily depends on the initialization because the optimization scheme of k-means clustering is prone to local minima.
Fortunately, the sum-of-squares minimization can be formulated as a trace maximization problem, which cannot be solved easily due to the binary decision variables used to denote cluster memberships; however, this hard optimization problem can be reduced to an eigenvalue decomposition problem by relaxing the constraints (Zha et al., 2001; Ding and He, 2004). In such a case, the overall clustering algorithm can be formulated in two steps: (i) performing principal component analysis (PCA) (Pearson, 1901) on the covariance matrix and (ii) recovering the cluster membership matrix using the k eigenvectors that correspond to the k largest eigenvalues. Similar to many other learning algorithms, k-means clustering has also been extended towards a nonlinear version with the help of kernel functions, which is called kernel k-means clustering (Girolami, 2002). The kernelized variant can also be optimized with a spectral relaxation approach using kernel PCA (KPCA) (Schölkopf et al., 1998) instead of canonical PCA. In many modern applications, samples have multiple feature representations (i.e., views) coming from different data sources. Instead of using only one of the views, it is better to use all available information and let the learning algorithm decide how to combine these data sources, which is known as multiview learning. There are three main categories for the combination strategy (Noble, 2004): (i) combination at the feature level by concatenating the views (i.e., early integration), (ii) combination at the decision level by concatenating the outputs of learners trained on each view separately (i.e., late integration), and (iii) combination at the learning level by trying to find a unified distance, kernel, or latent matrix using all views simultaneously (i.e., intermediate integration).

1.1 Related work

When we have multiple views for clustering, we can simply concatenate the views and train a standard clustering algorithm on the concatenated view, which is known as early integration.
However, this approach does not assign weights to the views, and the view with the highest number of features might dominate the clustering step due to the unsupervised nature of the problem. Late integration algorithms obtain a clustering on each view separately and combine these clustering results using an ensemble learning scheme. Such clustering algorithms are also known as cluster ensembles (Strehl and Ghosh, 2002). However, they do not exploit the dependencies between the views during clustering, and these dependencies might already be lost if we combine only clustering results in the second step. Intermediate integration algorithms combine the views in a single learning scheme to collectively find a unified clustering. Chaudhuri et al. (2009) propose to extract a unifying feature representation from the views by performing canonical correlation analysis (CCA) (Hotelling, 1936) and to train a clustering algorithm on this common representation. Similarly, Blaschko and Lampert (2008) extract a common feature representation but with a nonlinear projection step using kernel CCA (Lai and Fyfe, 2000) and then perform clustering. Such CCA-based algorithms assume that all views are informative, and if there are some noisy views, this can degrade the clustering performance drastically. Lange and Buhmann (2006) propose to optimize the weights of a convex combination of view-specific similarity measures within a nonnegative matrix factorization framework and to assign samples to clusters using the latent matrices obtained in the factorization step. Valizadegan and Jin (2007) extend the maximum margin clustering formulation of Xu et al. (2004) to perform kernel combination and clustering jointly by formulating a semidefinite programming (SDP) problem. Chen et al. (2007) further improve this idea by formulating a quadratically constrained quadratic programming problem instead of an SDP problem. Tang et al. 
(2009) convert the views into graphs by placing samples at vertices and creating edges using the similarity values between samples in each view, and then factorize these graphs jointly with a shared factor common to all graphs, which is used for clustering at the end. Kumar et al. (2011) propose a co-regularization strategy for multiview spectral clustering by enforcing agreement between the similarity matrices calculated on the latent representations obtained from the spectral decomposition of each view. Huang et al. (2012) formulate another multiview spectral clustering method that finds a weighted combination of the affinity matrices calculated on the views. Yu et al. (2012) develop a multiple kernel k-means clustering algorithm that optimizes the weights in a conic sum of kernels calculated on the views. However, their formulation uses the same kernel weights for all of the samples. Multiview clustering algorithms have attracted great interest in cancer biology due to the availability of multiple genomic characterizations of cancer patients. Yuan et al. (2011) formulate a patient-specific data fusion algorithm that uses a nonparametric Bayesian model coupled with a Markov chain Monte Carlo inference scheme, which can combine only two views and is computationally very demanding due to the high dimensionality of genomic data. Shen et al. (2012) and Mo et al. (2013) find a shared latent subspace across genomic views and cluster cancer patients using their representations in this subspace. Wang et al. (2014) construct patient networks from patient–patient similarity matrices calculated on the views, combine these into a single unified network using a network fusion approach, and then perform clustering on the final patient network.

1.2 Our contributions

Intermediate integration using kernel matrices is also known as multiple kernel learning (MKL) (Gönen and Alpaydın, 2011).
Most of the existing MKL algorithms use the same kernel weights for all samples, which may not be a good idea due to sample-specific characteristics of the data or measurement noise present in some of the views. In this work, we study kernel k-means clustering under the multiview setting and propose a novel MKL algorithm that combines kernels with sample-specific weights to obtain a better clustering. We demonstrate the better performance of our algorithm on the human colon and rectal cancer data set provided by the TCGA consortium (The Cancer Genome Atlas Network, 2012), where we use three genomic characterizations of the patients (i.e., DNA copy number, mRNA gene expression, and DNA methylation) for clustering. Our localized data fusion approach obtains more relevant prognostic patient groups than global fusion approaches when we evaluate the results with respect to three commonly used clinical biomarkers (i.e., microsatellite instability, hypermutation, and mutation in the BRAF gene) of colon and rectal cancer.

2 Kernel k-means clustering

We first review kernel k-means clustering (Girolami, 2002) before extending it to the multiview setting. Given $n$ independent and identically distributed samples $\{x_i \in \mathcal{X}\}_{i=1}^n$, we assume that there is a function $\Phi(\cdot)$ that maps the samples into a feature space, in which we try to minimize a sum-of-squares cost function over the cluster assignment variables $\{z_{ic}\}_{i=1,c=1}^{n,k}$. The optimization problem (OPT1) defines kernel k-means clustering as a binary integer programming problem, where $n_c$ is the number of samples assigned to cluster $c$, and $\mu_c$ is the centroid of cluster $c$.
$$\begin{aligned} \text{minimize} \quad & \sum_{i=1}^{n}\sum_{c=1}^{k} z_{ic}\,\|\Phi(x_i) - \mu_c\|_2^2 \\ \text{with respect to} \quad & z_{ic} \in \{0, 1\} \;\; \forall (i, c) \\ \text{subject to} \quad & \sum_{c=1}^{k} z_{ic} = 1 \;\; \forall i \\ \text{where} \quad & n_c = \sum_{i=1}^{n} z_{ic} \;\; \forall c, \qquad \mu_c = \frac{1}{n_c}\sum_{i=1}^{n} z_{ic}\,\Phi(x_i) \;\; \forall c \end{aligned} \tag{OPT1}$$

We can convert this optimization problem into an equivalent matrix-vector form as follows:

$$\begin{aligned} \text{minimize} \quad & \operatorname{tr}\big((\Phi - M)^\top(\Phi - M)\big) \\ \text{with respect to} \quad & Z \in \{0, 1\}^{n \times k} \\ \text{subject to} \quad & Z\mathbf{1}_k = \mathbf{1}_n \\ \text{where} \quad & \Phi = [\Phi(x_1)\;\Phi(x_2)\;\ldots\;\Phi(x_n)], \quad M = \Phi Z L Z^\top, \quad L = \operatorname{diag}(n_1^{-1}, n_2^{-1}, \ldots, n_k^{-1}) \end{aligned} \tag{OPT2}$$

Using that $\Phi^\top\Phi = K$, $\operatorname{tr}(AB) = \operatorname{tr}(BA)$, and $Z^\top Z = L^{-1}$, the objective function of (OPT2) can be rewritten as

$$\begin{aligned} \operatorname{tr}\big((\Phi - M)^\top(\Phi - M)\big) &= \operatorname{tr}\big((\Phi - \Phi Z L Z^\top)^\top(\Phi - \Phi Z L Z^\top)\big) \\ &= \operatorname{tr}\big(\Phi^\top\Phi - 2\Phi^\top\Phi Z L Z^\top + Z L Z^\top \Phi^\top\Phi Z L Z^\top\big) \\ &= \operatorname{tr}\big(K - 2 K Z L Z^\top + K Z L Z^\top Z L Z^\top\big) \\ &= \operatorname{tr}\big(K - L^{\frac{1}{2}} Z^\top K Z L^{\frac{1}{2}}\big), \end{aligned}$$

where $K$ is the kernel matrix that holds the similarity values between the samples, and $L^{\frac{1}{2}}$ is defined by taking the square root of the diagonal elements. The resulting optimization problem (OPT3) is a trace maximization problem, but it is still very difficult to solve due to the binary decision variables.

$$\begin{aligned} \text{maximize} \quad & \operatorname{tr}\big(L^{\frac{1}{2}} Z^\top K Z L^{\frac{1}{2}} - K\big) \\ \text{with respect to} \quad & Z \in \{0, 1\}^{n \times k} \\ \text{subject to} \quad & Z\mathbf{1}_k = \mathbf{1}_n \end{aligned} \tag{OPT3}$$

However, we can formulate a relaxed version of this optimization problem by renaming $Z L^{\frac{1}{2}}$ as $H$ and letting $H$ take arbitrary real values subject to orthogonality constraints.

$$\begin{aligned} \text{maximize} \quad & \operatorname{tr}\big(H^\top K H - K\big) \\ \text{with respect to} \quad & H \in \mathbb{R}^{n \times k} \\ \text{subject to} \quad & H^\top H = \mathbf{I}_k \end{aligned} \tag{OPT4}$$

The final optimization problem (OPT4) can be solved by performing KPCA on the kernel matrix $K$ and setting $H$ to the $k$ eigenvectors that correspond to the $k$ largest eigenvalues (Schölkopf et al., 1998). We can finally extract a clustering solution by first normalizing all rows of $H$ to be on the unit sphere and then performing k-means clustering on this normalized matrix. Note that, after the normalization step, $H$ contains $k$-dimensional representations of the samples on the unit sphere, and k-means is not very sensitive to initialization in such a case.
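The procedure just described — top-k eigenvectors of $K$, row normalization onto the unit sphere, then plain k-means — can be sketched as a minimal NumPy implementation (this is not the authors' released Matlab code; the deterministic farthest-point initialization is our own simplification of their multiple-restart strategy):

```python
import numpy as np

def kernel_kmeans_spectral(K, k, n_iter=100):
    """Relaxed kernel k-means: set H to the top-k eigenvectors of K
    (the KPCA solution of OPT4), normalize its rows to unit length,
    then run Lloyd's k-means on the normalized rows."""
    _, eigvecs = np.linalg.eigh(K)                 # eigenvalues ascending
    H = eigvecs[:, -k:]                            # k largest eigenvalues
    H = H / np.linalg.norm(H, axis=1, keepdims=True)

    # Farthest-point initialization avoids duplicate initial centers.
    idx = [0]
    for _ in range(k - 1):
        d = np.min(((H[:, None, :] - H[idx][None]) ** 2).sum(-1), axis=1)
        idx.append(int(np.argmax(d)))
    centers = H[idx]

    for _ in range(n_iter):
        labels = np.argmin(((H[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new_centers = np.array([H[labels == c].mean(axis=0) if np.any(labels == c)
                                else centers[c] for c in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels
```

On a kernel matrix with a clear two-block structure, the top-2 eigenvectors separate the blocks and the final k-means step recovers the two clusters.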
3 Multiple kernel k-means clustering

In a multiview learning scenario, we have multiple feature representations, where we assume that each representation has its own mapping function, i.e., $\{\Phi_m(\cdot)\}_{m=1}^{p}$. Instead of an unweighted combination of these views (i.e., simple concatenation), we can obtain a weighted mapping function by concatenating views using a convex sum (i.e., nonnegative weights that sum up to 1). This corresponds to replacing $\Phi(x_i)$ with $\Phi_\theta(x_i) = \big[\theta_1\Phi_1(x_i)^\top \;\; \theta_2\Phi_2(x_i)^\top \;\; \ldots \;\; \theta_p\Phi_p(x_i)^\top\big]^\top$, where $\theta \in \mathbb{R}_+^p$ is the vector of kernel weights that we need to optimize during training. The kernel function defined over the weighted mapping function becomes

$$k_\theta(x_i, x_j) = \langle \Phi_\theta(x_i), \Phi_\theta(x_j) \rangle = \sum_{m=1}^{p} \langle \theta_m\Phi_m(x_i), \theta_m\Phi_m(x_j) \rangle = \sum_{m=1}^{p} \theta_m^2\, k_m(x_i, x_j),$$

where we combine kernel functions using a conic sum (i.e., nonnegative weights), which guarantees a positive semi-definite kernel function at the end. The optimization problem (OPT5) gives the trace maximization problem we need to solve.

$$\begin{aligned} \text{maximize} \quad & \operatorname{tr}\big(H^\top K_\theta H - K_\theta\big) \\ \text{with respect to} \quad & H \in \mathbb{R}^{n \times k}, \; \theta \in \mathbb{R}_+^p \\ \text{subject to} \quad & H^\top H = \mathbf{I}_k, \; \theta^\top\mathbf{1}_p = 1 \\ \text{where} \quad & K_\theta = \sum_{m=1}^{p} \theta_m^2 K_m \end{aligned} \tag{OPT5}$$

We solve this problem using a two-step alternating optimization strategy: (i) Optimize $H$ given $\theta$. If we know the kernel weights (or initialize them randomly in the first iteration), solving (OPT5) reduces to solving (OPT4) with the combined kernel matrix $K_\theta$, which requires performing KPCA on $K_\theta$. (ii) Optimize $\theta$ given $H$. If we know the eigenvectors from the first step, solving (OPT5) reduces to solving (OPT6), which is a convex quadratic programming (QP) problem with $p$ decision variables and one equality constraint, and is solvable with any standard QP solver up to a moderate number of kernels.
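The alternating scheme can be sketched compactly. The paper solves (OPT6) with a generic QP solver; in this sketch we instead exploit the fact that its objective is a diagonal quadratic $\sum_m a_m \theta_m^2$ with $a_m = \operatorname{tr}(K_m - H^\top K_m H)$, whose simplex-constrained minimizer is $\theta_m \propto 1/a_m$ whenever every $a_m > 0$ (an assumption of this sketch, under which the nonnegativity constraints are inactive; function names are ours):

```python
import numpy as np

def multiple_kernel_kmeans(kernels, k, n_rounds=10):
    """Alternating optimization for (OPT5): a KPCA step for H, then the
    closed-form simplex minimizer of (OPT6) for the kernel weights theta.
    Assumes a_m = tr(K_m - H^T K_m H) > 0 for every kernel."""
    p = len(kernels)
    theta = np.full(p, 1.0 / p)                   # uniform initialization
    for _ in range(n_rounds):
        K = sum(t ** 2 * Km for t, Km in zip(theta, kernels))
        _, eigvecs = np.linalg.eigh(K)
        H = eigvecs[:, -k:]                       # top-k eigenvectors of K_theta
        a = np.array([np.trace(Km) - np.trace(H.T @ Km @ H) for Km in kernels])
        theta = (1.0 / a) / np.sum(1.0 / a)       # theta_m proportional to 1/a_m
    return theta, H
```

Each term $a_m$ is nonnegative for a PSD kernel because $\operatorname{tr}(H^\top K_m H)$ is at most the sum of the $k$ largest eigenvalues of $K_m$; kernels that are already well summarized by $H$ get larger weights.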
$$\begin{aligned} \text{minimize} \quad & \sum_{m=1}^{p} \theta_m^2 \operatorname{tr}\big(K_m - H^\top K_m H\big) \\ \text{with respect to} \quad & \theta \in \mathbb{R}_+^p \\ \text{subject to} \quad & \theta^\top\mathbf{1}_p = 1 \end{aligned} \tag{OPT6}$$

Note that using a convex combination of kernels in (OPT5) is not a viable option because if we set $K_\theta$ to $\sum_{m=1}^{p}\theta_m K_m$, there would be a trivial solution to the trace maximization problem with a single active kernel and the others with zero weights, which is also observed by Yu et al. (2012).

4 Localized multiple kernel k-means clustering

Instead of using the same kernel weights for all samples, we propose to use a localized data fusion approach by assigning sample-specific weights to kernels, which enables us to capture sample-specific characteristics of the data and to get rid of sample-specific noise that may be present in some of the views. In our localized combination approach, the mapping function is represented as $\Phi_\Theta(x_i) = \big[\theta_{i1}\Phi_1(x_i)^\top \;\; \theta_{i2}\Phi_2(x_i)^\top \;\; \ldots \;\; \theta_{ip}\Phi_p(x_i)^\top\big]^\top$, where $\Theta \in \mathbb{R}_+^{n \times p}$ is the matrix of sample-specific kernel weights, which are nonnegative and sum up to 1 for each sample (Gönen and Alpaydın, 2013). The locally combined kernel function can be written as

$$k_\Theta(x_i, x_j) = \langle \Phi_\Theta(x_i), \Phi_\Theta(x_j) \rangle = \sum_{m=1}^{p} \langle \theta_{im}\Phi_m(x_i), \theta_{jm}\Phi_m(x_j) \rangle = \sum_{m=1}^{p} \theta_{im}\theta_{jm}\, k_m(x_i, x_j),$$

where we are guaranteed to have a positive semi-definite kernel function. The optimization problem (OPT7) gives the trace maximization problem with the locally combined kernel matrix, where $\theta_m \in \mathbb{R}_+^n$ is the vector of kernel weights assigned to kernel $m$, and $\circ$ denotes the Hadamard product.

$$\begin{aligned} \text{maximize} \quad & \operatorname{tr}\big(H^\top K_\Theta H - K_\Theta\big) \\ \text{with respect to} \quad & H \in \mathbb{R}^{n \times k}, \; \Theta \in \mathbb{R}_+^{n \times p} \\ \text{subject to} \quad & H^\top H = \mathbf{I}_k, \; \Theta\mathbf{1}_p = \mathbf{1}_n \\ \text{where} \quad & K_\Theta = \sum_{m=1}^{p} (\theta_m\theta_m^\top) \circ K_m \end{aligned} \tag{OPT7}$$

We solve this problem using a two-step alternating optimization strategy: (i) Optimize $H$ given $\Theta$. If we know the sample-specific kernel weights (or initialize them randomly in the first iteration), solving (OPT7) reduces to solving (OPT4) with the combined kernel matrix $K_\Theta$, which requires performing KPCA on $K_\Theta$. (ii) Optimize $\Theta$ given $H$.
If we know the eigenvectors from the first step, using that $\operatorname{tr}\big(A^\top((cc^\top) \circ B)A\big) = c^\top\big((AA^\top) \circ B\big)c$, solving (OPT7) reduces to solving (OPT8), which is a convex QP problem with $n \times p$ decision variables and $n$ equality constraints.

$$\begin{aligned} \text{minimize} \quad & \sum_{m=1}^{p} \theta_m^\top\big((\mathbf{I}_n - HH^\top) \circ K_m\big)\theta_m \\ \text{with respect to} \quad & \Theta \in \mathbb{R}_+^{n \times p} \\ \text{subject to} \quad & \Theta\mathbf{1}_p = \mathbf{1}_n \end{aligned} \tag{OPT8}$$

Training the localized combination approach requires more computational effort than training the global approach due to the increased size of the QP problem in the second step. However, the block-diagonal structure of the Hessian matrix in (OPT8) can be exploited to solve this problem much more efficiently. Note that the objective function of (OPT8) can be written as

$$\begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_p \end{bmatrix}^\top \begin{bmatrix} (\mathbf{I}_n - HH^\top) \circ K_1 & \mathbf{0}_{n \times n} & \cdots & \mathbf{0}_{n \times n} \\ \mathbf{0}_{n \times n} & (\mathbf{I}_n - HH^\top) \circ K_2 & \cdots & \mathbf{0}_{n \times n} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0}_{n \times n} & \mathbf{0}_{n \times n} & \cdots & (\mathbf{I}_n - HH^\top) \circ K_p \end{bmatrix} \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_p \end{bmatrix},$$

where we have an $n \times n$ matrix for each kernel on the diagonal of the Hessian matrix.

5 Experiments

Clustering patients is one of the clinically important applications in cancer biology because it helps to identify prognostic cancer subtypes and to develop personalized strategies to guide therapy. Making use of multiple genomic characterizations in clustering is critical because different patients may manifest their disease in different genomic platforms due to cancer heterogeneity and measurement noise. We use the human colon and rectal cancer data set provided by the TCGA consortium (The Cancer Genome Atlas Network, 2012), which contains several genomic characterizations of the patients, to test our new clustering algorithm in a challenging real-world application. We use DNA copy number, mRNA gene expression, and DNA methylation data of the patients for clustering.
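For concreteness, the locally combined kernel matrix $K_\Theta$ of (OPT7), which the localized algorithm evaluated below relies on, is simple to materialize: entry $(i, j)$ of each base kernel is scaled by $\theta_{im}\theta_{jm}$ and the results are summed. A sketch (names are ours); each term is a Hadamard product of positive semi-definite matrices, so $K_\Theta$ is positive semi-definite by the Schur product theorem:

```python
import numpy as np

def locally_combined_kernel(kernels, Theta):
    """K_Theta = sum_m (theta_m theta_m^T) o K_m, where Theta[:, m] holds
    the per-sample weights for kernel m (rows of Theta lie on the simplex)."""
    K = np.zeros_like(kernels[0], dtype=float)
    for m, Km in enumerate(kernels):
        # Hadamard product with the rank-one weight matrix theta_m theta_m^T.
        K += np.outer(Theta[:, m], Theta[:, m]) * Km
    return K
```

With uniform weights $\theta_{im} = 1/p$ this reduces to the global combination $K_\theta$ with $\theta_m = 1/p$, so the localized model strictly generalizes the global one.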
In order to evaluate the clustering results, we use three commonly used clinical biomarkers of colon and rectal cancer (The Cancer Genome Atlas Network, 2012): (i) microsatellite instability (i.e., a hypermutable phenotype caused by the loss of DNA mismatch repair activity), (ii) hypermutation (defined as having mutations in more than or equal to 300 genes), and (iii) mutation in the BRAF gene. Note that these three biomarkers are not directly identifiable from the input data sources used. The preprocessed genomic characterizations of the patients can be downloaded from a public repository at https://www.synapse.org/#!Synapse:syn300013, where DNA copy number, mRNA gene expression, DNA methylation, and mutation data consist of 20313, 20530, 24980, and 14581 features, respectively. The microsatellite instability data can be downloaded from https://tcga-data.nci.nih.gov/tcga/dataAccessMatrix.htm. In the resulting data set, there are 204 patients with available genomic and clinical biomarker data. We implement kernel k-means clustering and its multiview variants in Matlab. Our implementations are publicly available at https://github.com/mehmetgonen/lmkkmeans. We solve the QP problems of the multiview variants using the Mosek optimization software (Mosek, 2014). For all methods, we perform 10 replications of k-means with different initializations as the last step and use the solution with the lowest sum-of-squares cost to decide cluster memberships. We calculate four different kernels to use in our experiments: (i) KC: the Gaussian kernel on DNA copy number data, (ii) KG: the Gaussian kernel on mRNA gene expression data, (iii) KM: the Gaussian kernel on DNA methylation data, and (iv) KCGM: the Gaussian kernel on the concatenated data (i.e., early integration). Before calculating each kernel, the input data is normalized to have zero mean and unit standard deviation (i.e., z-normalization for each feature).
For each kernel, we set the kernel width parameter to the square root of the number of features in its corresponding view. We compare seven clustering algorithms on this colon and rectal cancer data set: (i) kernel k-means clustering with KC, (ii) kernel k-means clustering with KG, (iii) kernel k-means clustering with KM, (iv) kernel k-means clustering with KCGM, (v) kernel k-means clustering with (KC + KG + KM) / 3, (vi) multiple kernel k-means clustering with (KC, KG, KM), and (vii) localized multiple kernel k-means clustering with (KC, KG, KM). The first three algorithms are single-view clustering methods that work on a single genomic characterization. The fourth algorithm is the early integration approach that combines the views at the feature level. The fifth and sixth algorithms are intermediate integration approaches that combine the kernels using unweighted and weighted sums, respectively, where the latter is very similar to the formulations of Huang et al. (2012) and Yu et al. (2012). The last algorithm is our localized MKL approach that combines the kernels in a sample-specific way. We assign three different binary labels to each sample as the ground truth using the three clinical biomarkers mentioned above and evaluate the clustering results using three different performance metrics: (i) normalized mutual information (NMI), (ii) purity, and (iii) the Rand index (RI). We set the number of clusters to 2 for all of the algorithms because each ground truth label has only two categories.
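The kernel construction used throughout this setup — z-normalize each feature, then a Gaussian kernel whose width is the square root of the view's dimensionality — can be sketched as below. Whether the width enters as $\exp(-\|\cdot\|^2/(2\sigma^2))$ or $\exp(-\|\cdot\|^2/\sigma^2)$ is a convention the text does not spell out; this sketch assumes the former:

```python
import numpy as np

def view_kernel(X):
    """Gaussian kernel on a z-normalized view; the width sigma is set to
    sqrt(d), where d is the number of features in the view (as described
    in the setup above)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # z-normalize each feature
    d = X.shape[1]
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    return np.exp(-sq / (2.0 * d))                        # sigma = sqrt(d)
```

The result is a symmetric matrix with unit diagonal and entries in (0, 1], as expected for a Gaussian kernel.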
We first show the kernel weights assigned to 204 colon and rectal cancer patients by our localized data fusion approach. [Figure 1: Kernel weights assigned to patients by our localized data fusion approach (panels: Gene expression, Copy number, Methylation). Each dot denotes a single cancer patient, and patients in the same cluster are drawn with the same color.] As we can see from Figure 1, some of the patients are very well characterized by their DNA copy number data. Our localized algorithm assigns weights larger than 0.5 to DNA copy number data for most of the patients in the second cluster, whereas all three views are used with comparable weights for the remaining patients. Note that the kernel weights of each patient are strictly nonnegative and sum up to 1 (i.e., defined on the unit simplex). Our proposed clustering algorithm can identify the most informative genomic platforms in an unsupervised and patient-specific manner. Together with the better clustering performance and biological interpretation presented next, this particular application from cancer biology shows the potential of the localized combination strategy. Figure 2 summarizes the results obtained by seven clustering algorithms on the colon and rectal cancer data set. For each algorithm, the cluster assignment and the values of three clinical biomarkers are aligned to each other, and we report the performance values of nine biomarker–metric pairs.
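The per-patient kernel weights discussed above live on the unit simplex (nonnegative entries summing to 1). The paper enforces this constraint inside a QP solved with Mosek; the standard sort-and-threshold Euclidean projection onto the simplex, sketched below for illustration only, is one simple way to map an arbitrary weight vector onto that set.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection onto the unit simplex (nonnegative entries
    summing to 1), via the standard sort-and-threshold method. Shown only
    to illustrate the constraint set; the paper itself uses a QP solver."""
    v = np.asarray(v, dtype=float)
    u = np.sort(v)[::-1]                  # sort in decreasing order
    css = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * k > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```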
We see that DNA copy number (i.e., KC) is the most informative genomic characterization when we compare the performance of single-view clustering algorithms, where it obtains better results than mRNA gene expression (i.e., KG) and DNA methylation (i.e., KM) in terms of NMI and RI on all biomarkers. We also see that the early integration strategy (i.e., KCGM) does not improve the results because mRNA gene expression and DNA methylation dominate the clustering step due to the unsupervised nature of the problem. However, when we combine the kernels using an unweighted combination strategy, i.e., (KC + KG + KM) / 3, the performance values are significantly improved compared to single-view clustering methods and early integration in terms of NMI and RI on all biomarkers. Instead of using an unweighted sum, we can optimize the combination weights using the multiple kernel k-means clustering of Section 3. In this case, the performance values are slightly improved compared to the unweighted sum in terms of NMI and RI on all biomarkers. Our localized data fusion approach significantly outperforms the other algorithms in terms of NMI and RI on “micro-satellite instability” and “hypermutation” biomarkers, and it is the only algorithm that can obtain purity values higher than the ratio of the majority class samples on “mutation in BRAF gene” biomarker. These results validate the benefit of our localized approach for the multiview setting. 
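The unweighted and localized kernel combinations compared above can be sketched as follows. `combine_kernels` with uniform weights reproduces the unweighted sum (KC + KG + KM) / 3; `localized_combine` uses one common sample-specific construction (weighting entry (i, j) by the weights of both samples), which may differ in detail from the paper's exact formulation.

```python
import numpy as np

def combine_kernels(kernels, weights=None):
    """Global combination: convex sum of kernel matrices. Uniform weights
    give the unweighted combination (K_C + K_G + K_M) / 3."""
    kernels = [np.asarray(K, dtype=float) for K in kernels]
    if weights is None:
        weights = np.full(len(kernels), 1.0 / len(kernels))
    return sum(w * K for w, K in zip(weights, kernels))

def localized_combine(kernels, Theta):
    """Sample-specific combination: K(i, j) = sum_m Theta[i, m] *
    Theta[j, m] * K_m(i, j), where row i of Theta holds the kernel
    weights of sample i. One common localized-MKL form, shown here as
    an illustration rather than the paper's exact construction."""
    K = np.zeros_like(np.asarray(kernels[0], dtype=float))
    for m, Km in enumerate(kernels):
        K += np.outer(Theta[:, m], Theta[:, m]) * Km
    return K
```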
[Figure 2 — results summary; for each algorithm the two cluster sizes are given, and performance values for each biomarker are listed in the order MSI high / Hypermutation / BRAF mutation:
Kernel k-means clustering with KC (102 / 102 patients): NMI 0.1466 / 0.1418 / 0.0459; Purity 0.8676 / 0.8480 / 0.8971; RI 0.5376 / 0.5426 / 0.5156
Kernel k-means clustering with KG (117 / 87 patients): NMI 0.0504 / 0.0514 / 0.0174; Purity 0.8676 / 0.8480 / 0.8971; RI 0.5082 / 0.5091 / 0.5082
Kernel k-means clustering with KM (83 / 121 patients): NMI 0.0008 / 0.0049 / 0.0026; Purity 0.8676 / 0.8480 / 0.8971; RI 0.5143 / 0.5105 / 0.5143
Kernel k-means clustering with KCGM (87 / 117 patients): NMI 0.0019 / 0.0127 / 0.0041; Purity 0.8676 / 0.8480 / 0.8971; RI 0.5105 / 0.5076 / 0.5105
Kernel k-means clustering with (KC + KG + KM) / 3 (119 / 85 patients): NMI 0.2437 / 0.2303 / 0.0945; Purity 0.8676 / 0.8480 / 0.8971; RI 0.6009 / 0.6096 / 0.5568
Multiple kernel k-means clustering with (KC, KG, KM) (122 / 82 patients): NMI 0.2557 / 0.2431 / 0.1013; Purity 0.8676 / 0.8480 / 0.8971; RI 0.6141 / 0.6233 / 0.5666
Localized multiple kernel k-means clustering with (KC, KG, KM) (158 / 46 patients): NMI 0.3954 / 0.3788 / 0.1481; Purity 0.8873 / 0.8873 / 0.8971; RI 0.8088 / 0.8088 / 0.7114]
Figure 2: Results obtained by seven clustering algorithms on the colon and rectal cancer data set provided by TCGA consortium (The Cancer Genome Atlas Network, 2012). For each algorithm, we first display the cluster assignment and report the number of patients in each cluster.
We then display the values of three clinical biomarkers aligned with the cluster assignment, where “MSI high” shows the patients with high micro-satellite instability status in darker color, “Hypermutation” shows the patients with mutations in more than or equal to 300 genes in darker color, and “BRAF mutation” shows the patients with a mutation in their BRAF gene in darker color. We compare the algorithms in terms of their clustering performance on three clinical biomarkers under three metrics: normalized mutual information (NMI), purity, and the Rand index (RI). For all performance metrics, a higher value means better performance, and for each biomarker–metric pair, the best result is reported in bold face. We see that our localized clustering algorithm obtains the best result for eight out of nine biomarker–metric pairs, whereas all algorithms have the same purity value for BRAF mutation.

Figure 3: Important features in genomic views determined using the solution of multiple kernel k-means clustering together with cluster assignment and mutations in frequently mutated genes. For each genomic view, we calculate the Pearson correlation values between features and clustering assignment, and display topmost 100 positively correlated and bottommost 100 negatively correlated features (red: high, blue: low). We also display the mutation status (black: mutated, white: wildtype) of patients for 102 most frequently mutated genes, which are mutated in at least 16 patients.

Figure 4: Important features in genomic views determined using the solution of localized multiple kernel k-means clustering together with cluster assignment and mutations in frequently mutated genes. See Figure 3 for details.

We perform an additional biological interpretation step by looking at the features that can be used to differentiate the clusters found.
Figures 3 and 4 show features in genomic views that are highly (positively or negatively) correlated with the cluster assignments of the two best performing algorithms in terms of clustering performance, namely, multiple kernel k-means clustering and localized multiple kernel k-means clustering. We clearly see that the genomic signatures of the hyper-mutated cluster (especially the one for DNA copy number) obtained using our localized data fusion approach are much less noisy than those of global data fusion. Identifying clear genomic signatures is clinically important because they can be used for diagnostic and prognostic purposes on new patients.

6 Discussion

We introduce a localized data fusion approach for kernel k-means clustering to better capture sample-specific characteristics of the data in the multiview setting, which cannot be captured using global data fusion strategies such as Huang et al. (2012) and Yu et al. (2012). The proposed method is from the family of MKL algorithms and combines the kernels defined on the views with sample-specific weights to determine the relative importance of the views for each sample. We illustrate the practical importance of the method on a human colon and rectal cancer data set by clustering patients using their three different genomic characterizations. The results show that our localized data fusion strategy can identify more relevant prognostic patient groups than global data fusion strategies. Interesting topics for future research are: (i) exploiting the special structure of the Hessian matrix in our formulation by developing a customized solver instead of using off-the-shelf optimization software to improve the time complexity, and (ii) integrating prior knowledge about the samples that we may have into our formulation to be able to find more relevant clusters.

Acknowledgments.
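The feature-ranking step behind Figures 3 and 4 — Pearson correlation between each feature and the cluster assignment, keeping the topmost positively and bottommost negatively correlated features — can be sketched as:

```python
import numpy as np

def top_correlated_features(X, clusters, k=100):
    """Rank features by Pearson correlation with a (binary) cluster
    assignment and return the k most positively and k most negatively
    correlated feature indices, as used for the feature heatmaps."""
    c = np.asarray(clusters, dtype=float)
    c = c - c.mean()
    Xc = X - X.mean(axis=0)
    denom = np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((c ** 2).sum())
    denom[denom == 0] = np.inf        # constant features get correlation 0
    corr = (Xc * c[:, None]).sum(axis=0) / denom
    order = np.argsort(corr)
    return order[-k:][::-1], order[:k]  # top positive, top negative
```

Running this per genomic view would yield the 100 + 100 features displayed in each heatmap.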
This study was financially supported by the Integrative Cancer Biology Program (grant no 1U54CA149237) and the Cancer Target Discovery and Development (CTDD) Network (grant no 1U01CA176303) of the National Cancer Institute.

References

M. B. Blaschko and C. H. Lampert. Correlational spectral clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008.
K. Chaudhuri, S. M. Kakade, K. Livescu, and K. Sridharan. Multi-view clustering via canonical correlation analysis. In Proceedings of the 26th International Conference on Machine Learning, 2009.
J. Chen, Z. Zhao, J. Ye, and H. Liu. Nonlinear adaptive distance metric learning for clustering. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2007.
C. Ding and X. He. K-means clustering via principal component analysis. In Proceedings of the 21st International Conference on Machine Learning, 2004.
M. Girolami. Mercer kernel-based clustering in feature space. IEEE Transactions on Neural Networks, 13(3):780–784, 2002.
M. Gönen and E. Alpaydın. Multiple kernel learning algorithms. Journal of Machine Learning Research, 12(Jul):2211–2268, 2011.
M. Gönen and E. Alpaydın. Localized algorithms for multiple kernel learning. Pattern Recognition, 46(3):795–807, 2013.
J. A. Hartigan. Clustering Algorithms. John Wiley & Sons, Inc., New York, NY, USA, 1975.
H. Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):321–327, 1936.
H.-C. Huang, Y.-Y. Chuang, and C.-S. Chen. Affinity aggregation for spectral clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2012.
A. Kumar, P. Rai, and H. Daumé III. Co-regularized multi-view spectral clustering. In Advances in Neural Information Processing Systems 24, 2011.
P. L. Lai and C. Fyfe. Kernel and nonlinear canonical correlation analysis. International Journal of Neural Systems, 10(5):365–377, 2000.
T. Lange and J. M. Buhmann.
Fusion of similarity data in clustering. In Advances in Neural Information Processing Systems 18, 2006.
Q. Mo, S. Wang, V. E. Seshan, A. B. Olshen, N. Schultz, C. Sander, R. S. Powers, M. Ladanyi, and R. Shen. Pattern discovery and cancer gene identification in integrated cancer genomic data. Proceedings of the National Academy of Sciences of the United States of America, 110(11):4245–4250, 2013.
Mosek. The MOSEK Optimization Tools Manual Version 7.0 (Revision 134). MOSEK ApS, Denmark, 2014.
W. S. Noble. Support vector machine applications in computational biology. In B. Schölkopf, K. Tsuda, and J.-P. Vert, editors, Kernel Methods in Computational Biology, chapter 3. The MIT Press, 2004.
K. Pearson. On lines and planes of closest fit to systems of points in space. Philosophical Magazine, 2(11):559–572, 1901.
B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299–1319, 1998.
R. Shen, Q. Mo, N. Schultz, V. E. Seshan, A. B. Olshen, J. Huse, M. Ladanyi, and C. Sander. Integrative subtype discovery in glioblastoma using iCluster. PLoS ONE, 7(4):e35236, 2012.
A. Strehl and J. Ghosh. Cluster ensembles – A knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research, 3(Dec):583–617, 2002.
W. Tang, Z. Lu, and I. S. Dhillon. Clustering with multiple graphs. In Proceedings of the 9th IEEE International Conference on Data Mining, 2009.
The Cancer Genome Atlas Network. Comprehensive molecular characterization of human colon and rectal cancer. Nature, 487(7407):330–337, 2012.
H. Valizadegan and R. Jin. Generalized maximum margin clustering and unsupervised kernel learning. In Advances in Neural Information Processing Systems 19, 2007.
B. Wang, A. M. Mezlini, F. Demir, M. Fiume, Z. Tu, M. Brudno, B. Haibe-Kains, and A. Goldenberg. Similarity network fusion for aggregating data types on a genomic scale. Nature Methods, 11(3):333–337, 2014.
L. Xu, J. Neufeld, B.
Larson, and D. Schuurmans. Maximum margin clustering. In Advances in Neural Information Processing Systems 17, 2004.
S. Yu, L.-C. Tranchevent, X. Liu, W. Glänzel, J. A. K. Suykens, B. De Moor, and Y. Moreau. Optimized data fusion for kernel k-means clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(5):1031–1039, 2012.
Y. Yuan, R. S. Savage, and F. Markowetz. Patient-specific data fusion defines prognostic cancer subtypes. PLoS Computational Biology, 7(10):e1002227, 2011.
H. Zha, X. He, C. Ding, H. Simon, and M. Gu. Spectral relaxation for K-means clustering. In Advances in Neural Information Processing Systems 14, 2001.
Object Localization based on Structural SVM using Privileged Information

Jan Feyereisl, Suha Kwak∗, Jeany Son, Bohyung Han
Dept. of Computer Science and Engineering, POSTECH, Pohang, Korea
thefillm@gmail.com, {mercury3,jeany,bhhan}@postech.ac.kr

Abstract

We propose a structured prediction algorithm for object localization based on Support Vector Machines (SVMs) using privileged information. Privileged information provides useful high-level knowledge for image understanding and facilitates learning a reliable model even with a small number of training examples. In our setting, we assume that such information is available only at training time since it may be difficult to obtain from visual data accurately without human supervision. Our goal is to improve performance by incorporating privileged information into an ordinary learning framework and adjusting model parameters for better generalization. We tackle the object localization problem based on a novel structural SVM using privileged information, where an alternating loss-augmented inference procedure is employed to handle the term in the objective function corresponding to privileged information. We apply the proposed algorithm to the Caltech-UCSD Birds 200-2011 dataset, and obtain encouraging results suggesting further investigation into the benefit of privileged information in structured prediction.

1 Introduction

Object localization is often formulated as a binary classification problem, where a learned classifier determines the presence or absence of a target object within a candidate window of every location, size, and aspect ratio. Recently, a structured prediction technique using Support Vector Machine (SVM) has been applied to this problem [1], where the optimal bounding box containing the target object is obtained by a trained classifier.
This approach provides a unified framework for detection and post-processing (non-maximum suppression), and naturally handles objects with variable aspect ratios. However, object localization is an inherently difficult task due to the large amount of variation in objects and scenes, e.g., shape deformations, color variations, pose changes, occlusion, viewpoint changes, background clutter, etc. This issue is aggravated when the size of the training dataset is small. A more reliable model can be learned even with fewer training examples if additional high-level knowledge about an object of interest is available during training. Such high-level knowledge is called privileged information, which typically describes useful semantic properties of an object such as parts, attributes, and segmentations. This idea corresponds to the Learning Using Privileged Information (LUPI) paradigm [3], which exploits the additional information to improve predictive models in training but does not require the information for prediction. The LUPI framework has been incorporated into SVM in the form of the SVM+ algorithm [4]. However, the applications of SVM+ are often limited to binary classification problems [3, 4]. We propose a novel Structural SVM using privileged information (SSVM+) framework, shown in Figure 1, and apply the algorithm to the problem of object localization.
In this formulation, privileged information, e.g., parts, attributes, and segmentations, is incorporated to learn a structured prediction function for object localization. Note that high-level information is available only for training but not testing in this framework. [Footnote: ∗Current affiliation: INRIA–WILLOW Project, Paris, France; e-mail: suha.kwak@inria.fr] [Figure 1: Overview of our object localization framework using privileged information. Unlike visual observations, privileged information is available only during training. We use attributes and segmentation masks of an object as privileged information to improve generalization of the trained model. To incorporate privileged information during training, we propose an extension of SSVM, called SSVM+, whose loss-augmented inference is performed by alternating Efficient Subwindow Search (ESS) [2].] Our algorithm employs an efficient branch-and-bound loss-augmented subwindow search procedure to perform the inference by a joint optimization in original and privileged spaces during training. Since the additional information is not used in testing, the inference in the testing phase is the same as the standard Structural SVM (SSVM) case. We evaluate our method by learning to localize birds in the Caltech-UCSD Birds 200-2011 (CUB-2011) dataset [5] and exploiting attributes and segmentation masks as privileged information in addition to standard visual features.
The main contributions of our work are as follows:
• We introduce a novel framework for object localization exploiting privileged information that is neither required nor needs to be inferred at test time.
• We formulate an SSVM+ framework, where an alternating loss-augmented inference procedure for efficient subwindow search is incorporated to handle the privileged information together with the conventional visual features.
• Performance gains in localization and classification are achieved, especially with small training datasets.
Methods that exploit additional information have been discussed to improve models for image classification or search in the context of transfer learning [6, 7], learning with side information [8, 9, 10] and domain adaptation [11], where underlying techniques rely on pair-wise constraints [8], multiple kernels [9] or metric learning [9]. Zero-shot learning is an extreme framework, where the models for unseen classes are constructed even without training data [12, 13]. Recent works often rely on natural language processing techniques to handle pure textual description [14, 15]. Standard learning algorithms require large amounts of data to construct a robust model while zero-shot learning does not need any training examples. The LUPI framework sits between traditional data-driven learning and zero-shot learning since it aims to learn a good model with a small number of training examples by taking advantage of privileged information available at training time. Privileged information has been considered in face recognition [16], facial feature detection [17], and event recognition [18], but such works are still uncommon. Our work applies the LUPI framework to an object localization problem based on SSVM. The use of SSVMs for object localization was originally investigated in [1]. More recently, [19, 20] employ SSVM as part of their localization procedure; however, none of them incorporates privileged information or a similar idea.
Recently, [21] presented the potential benefit of SVM+ in an object recognition task. The rest of this paper is organized as follows. We first review the LUPI framework and SSVM in Section 2, and our SSVM+ formulation for object localization is presented in Section 3. The performance of our object localization algorithm is evaluated in Section 4.

2 Background

2.1 Learning Using Privileged Information

The LUPI paradigm [3, 4, 22, 23] is a framework for incorporating additional information during training that is not available at test time. The inclusion of such information is exploited to find a better model, which yields lower generalization error. Contrary to classical supervised learning, where pairs of data are provided, $(x_1, y_1), \ldots, (x_n, y_n)$, $x_i \in X$, $y_i \in \{-1, 1\}$, in the LUPI paradigm additional information $x^* \in X^*$ is provided with each training example as well, i.e., $(x_1, x_1^*, y_1), \ldots, (x_n, x_n^*, y_n)$, $x_i \in X$, $x_i^* \in X^*$, $y_i \in \{-1, 1\}$. This information is, however, not required during testing. In both learning paradigms, the task is then to find among a collection of functions the one that best approximates the underlying decision function from the given data. Specifically, we formulate object localization within a LUPI framework as learning a pair of functions $h : X \mapsto Y$ and $\phi : X^* \mapsto Y$ jointly, where only $h$ is used for prediction. These functions, for example, map the space of images and attributes to the space of bounding box coordinates $Y$. The decision function $h$ and the correcting function $\phi$ depend on each other by the following relation,

$$\ell_X(h(x_i), y_i) \le \ell_{X^*}(\phi(x_i^*), y_i), \quad \forall\, 1 \le i \le n, \qquad (1)$$

where $\ell_X$ and $\ell_{X^*}$ denote the empirical loss functions on the visual space ($X$) and the privileged space ($X^*$), respectively. This inequality is inspired by the LUPI paradigm [3, 4, 22, 23], where for all training examples the model $h$ is always corrected to have a smaller loss on data than the model $\phi$ on privileged information. The constraint in Eq.
(1) is meaningful when we assume that, for the same number of training examples, the combination of visual and privileged information provides a space to learn a better model than visual information alone. To translate this general learning idea into practice, the SVM+ algorithm for binary classification has been developed [3, 4, 22]. The SVM+ algorithm replaces the slack variable $\xi$ in the standard SVM formulation by a correcting function $\xi = (\langle w^*, x^* \rangle + b^*)$, which estimates its values from the privileged information. This results in the following formulation,

$$\min_{w, w^*, b, b^*} \; \frac{1}{2}\|w\|_2^2 + \frac{\gamma}{2}\|w^*\|_2^2 + \frac{C}{n}\sum_{i=1}^{n} \underbrace{(\langle w^*, x_i^* \rangle + b^*)}_{\xi_i}, \qquad (2)$$

$$\text{s.t.} \quad y_i(\langle w, x_i \rangle + b) \ge 1 - \underbrace{(\langle w^*, x_i^* \rangle + b^*)}_{\xi_i}, \quad \underbrace{(\langle w^*, x_i^* \rangle + b^*)}_{\xi_i} \ge 0, \quad \forall\, 1 \le i \le n,$$

where the terms $w^*$, $x^*$ and $b^*$ play the same role as $w$, $x$ and $b$ in the classical SVM, however within the new correcting space $X^*$. Furthermore, $\gamma$ denotes a regularization parameter for $w^*$. It is important to observe that the weight vector $w$ depends not only on $x$ but also on $x^*$. For this reason the function that replaces the slack $\xi$ is called the correcting function. As privileged information is only used to estimate the values of the slacks, it is required only during training but not during testing. Theoretical analysis [4] shows that the bound on the convergence rate of the above SVM+ algorithm could substantially improve upon standard SVM if suitable privileged information is used.

2.2 Structural SVM (SSVM)

SSVMs discriminatively learn a weight vector $w$ for a scoring function $f : X \times Y \mapsto \mathbb{R}$ over the set of training input/output pairs. Once learned, the prediction function $h$ is obtained by maximizing $f$ over all possible $y \in Y$ as follows:

$$\hat{y} = h(x) = \arg\max_{y \in Y} f(x, y) = \arg\max_{y \in Y} \langle w, \Psi(x, y) \rangle, \qquad (3)$$

where $\Psi : X \times Y \to \mathbb{R}^d$ is the joint feature map that models the relationship between input $x$ and structured output $y$.
To learn the weight vector $w$, the following optimization problem (margin-rescaling) then needs to be solved:

$$\min_{w, \xi} \; \frac{1}{2}\|w\|^2 + \frac{C}{n}\sum_{i=1}^{n} \xi_i, \qquad (4)$$

$$\text{s.t.} \quad \langle w, \delta\Psi_i(y) \rangle \ge \Delta(y_i, y) - \xi_i, \quad 1 \le i \le n, \; \forall y \in Y,$$

where $\delta\Psi_i(y) \equiv \Psi(x_i, y_i) - \Psi(x_i, y)$, and $\Delta(y_i, y)$ is a task-specific loss that measures the quality of the prediction $y$ with respect to the ground-truth $y_i$. To obtain a prediction, we need to maximize Eq. (3) over the response variable $y$ for a given input $x$. SSVMs are a general method for solving a variety of prediction tasks. For each application, the joint feature map $\Psi$, the loss function $\Delta$ and an efficient loss-augmented inference technique need to be customized.

3 Object Localization with Privileged Information

We deal with object localization with privileged information: given a set of training images of objects, their locations and their attribute and segmentation information, we want to learn a function to localize objects of interest in yet unseen images. Unlike existing methods, our learned function does not need explicit or even inferred attribute and segmentation information during prediction.

3.1 Structural SVM with Privileged Information (SSVM+)

We extend the above structured prediction problem to exploit privileged information. Recollecting Eq. (1), to learn the pair of interdependent functions $h$ and $\phi$, we learn to predict a structure $y$ based on a training set of triplets, $(x_1, x_1^*, y_1), \ldots, (x_n, x_n^*, y_n)$, $x_i \in X$, $x_i^* \in X^*$, $y_i \in Y$, where $X$ corresponds to various visual features, $X^*$ to attributes or segmentations, and $Y$ is the space of all possible bounding boxes. Once learned, only the function $h$ is used for prediction. It is obtained by maximizing the learned function over all possible joint features based on input $x \in X$ and output $y \in Y$ as in Eq. (3), identically to standard SSVMs. On the other hand, to jointly learn $h$ and $\phi$, subject to the constraint in Eq. (1), we need to extend the SSVM framework substantially.
The functions $h$ and $\phi$ are characterized by the parameter vectors $w$ and $w^*$, respectively, as

$$h(x) = \arg\max_{y \in Y} \langle w, \Psi(x, y) \rangle \quad \text{and} \quad \phi(x^*) = \arg\max_{y^* \in Y} \langle w^*, \Psi^*(x^*, y^*) \rangle. \qquad (5)$$

To learn the weight vectors $w$ and $w^*$ simultaneously, we propose a novel max-margin structured prediction framework called SSVM+ that incorporates the constraint in Eq. (1) and hence learns two models jointly as follows:

$$\min_{w, w^*, \xi} \; \frac{1}{2}\|w\|^2 + \frac{\gamma}{2}\|w^*\|^2 + \frac{C}{n}\sum_{i=1}^{n} \xi_i, \qquad (6)$$

$$\text{s.t.} \quad \langle w, \delta\Psi_i(y) \rangle + \langle w^*, \delta\Psi_i^*(y^*) \rangle \ge \bar{\Delta}(y_i, y, y^*) - \xi_i \quad \forall\, 1 \le i \le n, \; \forall y, y^* \in Y,$$

where $\delta\Psi_i^*(y^*) \equiv \Psi^*(x_i^*, y_i) - \Psi^*(x_i^*, y^*)$ and the inequality in Eq. (1) is introduced via a surrogate task-specific loss $\bar{\Delta}$ derived from [23]. This surrogate loss is defined as

$$\bar{\Delta}(y_i, y, y^*) = \frac{1}{\rho}\Delta^*(y_i, y^*) + \left[\Delta(y_i, y) - \Delta^*(y_i, y^*)\right]_+, \qquad (7)$$

where $[t]_+ = \max(t, 0)$ and $\rho > 0$ is a penalization parameter corresponding to the constraint in Eq. (1), and the task-specific loss functions $\Delta$ and $\Delta^*$ are defined in Section 3.3. Through this surrogate loss, we can apply the inequality in Eq. (1) within the ordinary max-margin optimization framework. Our framework enforces that the model learned on attributes and segmentations ($w^*$) always corrects the model trained on visual features ($w$). This results in a model with better generalization on visual features alone. Similar to SSVMs, we can tractably deal with the exponential number of possible constraints present in our problem via loss-augmented inference and optimization methods such as the cutting plane algorithm [24] or the more recent block-coordinate Frank-Wolfe method [25]. Pseudocode for solving Eq. (6) using the cutting plane method is presented in Algorithm 1. Our formulation has a general form that follows the SSVM framework. This means that Eq. (6) is independent of the definitions of the joint feature map, task-specific loss and loss-augmented inference. We can therefore apply our method to a variety of other problems in addition to object localization.
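The surrogate loss of Eq. (7) is simple to evaluate once the two scalar loss values are known; a minimal sketch:

```python
def surrogate_loss(delta, delta_star, rho):
    """Surrogate task-specific loss of Eq. (7):
    (1/rho) * Delta^* + [Delta - Delta^*]_+, with [t]_+ = max(t, 0)."""
    return delta_star / rho + max(delta - delta_star, 0.0)
```

Note that whenever the loss in the original space exceeds the one in the privileged space, the positive-part term penalizes the violation of the constraint in Eq. (1).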
All that is required is the definition of the three problem-specific components, which are also required in the standard SSVMs. As will be shown later, only the loss-augmented inference step becomes harder compared to SSVMs due to the inclusion of privileged information.

Algorithm 1: Cutting plane method for solving Eq. (6)
1: Input: $(x_1, x_1^*, y_1), \ldots, (x_n, x_n^*, y_n)$, $C$, $\rho$, $\gamma$, $\epsilon$
2: $S_i \leftarrow \emptyset$ for all $i = 1, \ldots, n$
3: repeat
4:   for $i = 1, \ldots, n$ do
5:     Set up surrogate task-specific loss (Eq. (7)):
6:     $\bar{\Delta}(y_i, y, y^*) = \frac{1}{\rho}\Delta^*(y_i, y^*) + [\Delta(y_i, y) - \Delta^*(y_i, y^*)]_+$
7:     Set up cost function (Eq. (12)):
8:     $H(y, y^*) = \bar{\Delta}(y_i, y, y^*) - \langle w, \delta\Psi_i(y) \rangle - \langle w^*, \delta\Psi_i^*(y^*) \rangle$
9:     Find cutting plane:
10:    $(\hat{y}, \hat{y}^*) = \arg\max_{y, y^* \in Y} H(y, y^*)$
11:    Find value of current slack:
12:    $\xi_i = \max\{0, \max_{y, y^* \in S_i} H(y, y^*)\}$
13:    if $H(\hat{y}, \hat{y}^*) > \xi_i + \epsilon$ then
14:      Add constraint to working set:
15:      $S_i \leftarrow S_i \cup \{(\hat{y}, \hat{y}^*)\}$
16:      $(w, w^*) \leftarrow$ optimize Eq. (6) over $\cup_i S_i$
17:    end if
18:  end for
19: until no $S_i$ has changed during iteration

3.2 Joint Feature Map

Our extended structured output regressor, SSVM+, estimates bounding box coordinates within target images by considering all possible bounding boxes. The structured output space is defined as $Y \equiv \{(\theta, t, l, b, r) \mid \theta \in \{+1, -1\}, (t, l, b, r) \in \mathbb{R}^4\}$, where $\theta$ denotes the presence/absence of an object and $(t, l, b, r)$ correspond to coordinates of the top, left, bottom, and right corners of a bounding box, respectively. To model the relationship between input and output variables, we define a joint feature map, encoding features in $x$ to their bounding boxes defined by $y$. This is modeled as

$$\Psi(x_i, y) = x_i|_y, \qquad (8)$$

where $x|_y$ denotes the region of an image inside a bounding box with coordinates $y$. Identically, for the privileged space, we define another joint feature map which, instead of visual features, operates on the space of attributes aided by segmentation information as

$$\Psi^*(x_i^*, y^*) = x_i^*|_{y^*}.$$
(9)

The definition of the joint feature map is problem specific, and we follow the method in [1] proposed for object localization. Implementation details about both joint feature maps are described in Section 4.2.

3.3 Task-Specific Loss

To measure the level of discrepancy between the predicted output $y$ and the true structured label $y_i$, we need to define a loss function that accurately measures such a level of disagreement. In our object localization problem, the following task-specific loss, based on the Pascal VOC overlap ratio [1], is employed in both spaces,

$$\Delta(y_i, y) = \begin{cases} 1 - \dfrac{\mathrm{area}(y_i \cap y)}{\mathrm{area}(y_i \cup y)} & \text{if } y_{i\theta} = y_\theta = 1, \\[2mm] 1 - \frac{1}{2}(y_{i\theta}\, y_\theta + 1) & \text{otherwise,} \end{cases} \qquad (10)$$

where $y_{i\theta} \in \{+1, -1\}$ denotes the presence ($+1$) or absence ($-1$) of an object in the $i$-th image. In the case $y_{i\theta} = -1$, $\Psi(x|_y) = \mathbf{0}$, where $\mathbf{0}$ is an all-zero vector. The loss is 0 when the bounding boxes defined by $y_i$ and $y$ are identical, and equal to 1 when they are disjoint or $y_{i\theta} \ne y_\theta$.

3.4 Loss-Augmented Inference

Due to the exponential number of constraints that arise during learning of Eq. (6) and the possibly very large search space $Y$ dealt with during prediction, we require an efficient inference technique, which may differ in training and testing in the SSVM+ framework.

3.4.1 Prediction

The goal is to find the best bounding box given the learned weight vector $w$ and the visual feature $x$. Privileged information is not available at testing time, and inference is performed on visual features only. Therefore, the same maximization problem as in standard SSVMs needs to be solved during prediction, which is given by

$$h(x) = \arg\max_{y \in Y} \langle w, \Psi(x, y) \rangle. \qquad (11)$$

This maximization problem is over the space of bounding box coordinates. However, this problem involves a very large search space and therefore cannot be solved exhaustively. In the object localization task, the Efficient Subwindow Search (ESS) algorithm [2] is employed to solve the optimization problem efficiently.
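The task-specific loss of Eq. (10) can be written directly from the box coordinates; a minimal sketch, with boxes encoded as (theta, t, l, b, r) tuples:

```python
def localization_loss(y_true, y_pred):
    """Task-specific loss of Eq. (10). Boxes are (theta, t, l, b, r) with
    theta in {+1, -1}; when both boxes mark a present object the loss is
    one minus the Pascal VOC overlap (intersection over union)."""
    th_i, t1, l1, b1, r1 = y_true
    th_p, t2, l2, b2, r2 = y_pred
    if not (th_i == 1 and th_p == 1):
        # 0 if both mark an absent object, 1 if presence labels disagree
        return 1.0 - 0.5 * (th_i * th_p + 1.0)
    ih = max(0.0, min(b1, b2) - max(t1, t2))   # intersection height
    iw = max(0.0, min(r1, r2) - max(l1, l2))   # intersection width
    inter = ih * iw
    union = (b1 - t1) * (r1 - l1) + (b2 - t2) * (r2 - l2) - inter
    return 1.0 - inter / union
```

As the text states, the loss is 0 for identical boxes and 1 for disjoint boxes or mismatched presence labels.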
3.4.2 Learning

Compared to the inference problem required during the prediction step shown in Eq. (11), the optimization of our main objective during training involves a more complex inference procedure. We need to perform the following maximization, with the surrogate loss and an additional term corresponding to the privileged space, during an iterative procedure:

(ŷ, ŷ*) = argmax_{y, y* ∈ Y} ∆̄(y_i, y, y*) − ⟨w, δΨ_i(y)⟩ − ⟨w*, δΨ*_i(y*)⟩
        = argmax_{y, y* ∈ Y} ∆̄(y_i, y, y*) + ⟨w, Ψ(x_i, y)⟩ + ⟨w*, Ψ*(x*_i, y*)⟩.   (12)

Note that ⟨w, Ψ(x_i, y_i)⟩ and ⟨w*, Ψ*(x*_i, y_i)⟩ are constants in Eq. (12) and do not affect the optimization. The problem in Eq. (12), called loss-augmented inference, is required during each iteration of the cutting plane method, which is used for learning the functions h and φ and hence the weight vectors w and w*. We adopt an alternating approach for the inference, where we first solve for y* in the privileged space given a fixed solution y_c in the original space,

argmax_{y* ∈ Y} ∆̄(y_i, y_c, y*) + ⟨w*, Ψ*(x*_i, y*)⟩,   (13)

and subsequently perform the optimization in the original space while fixing y*_c,

argmax_{y ∈ Y} ∆̄(y_i, y, y*_c) + ⟨w, Ψ(x_i, y)⟩.   (14)

These two sub-procedures in Eq. (13) and (14) are repeated until convergence, and we obtain the final solutions w and w*. In the object localization task, both problems are solved by ESS [2], a branch-and-bound optimization technique, for which it is essential to derive upper bounds of the above objective functions over a set of rectangles from Y. Here we derive the upper bounds of only the surrogate loss terms in Eq. (7); the derivation for the other terms can be found in [2]. When the solution in the privileged space is fixed, we need to consider the upper bound of only [∆ − ∆*]_+ to obtain the upper bound of the surrogate loss. Since [∆ − ∆*]_+ is a monotonically increasing function of ∆, its upper bound is derived directly from the upper bound of ∆.
Specifically, the upper bound of ∆ is given by

∆ = 1 − area(y_i ∩ y)/area(y_i ∪ y) ≤ 1 − min_{y∈Y} area(y_i ∩ y) / max_{y∈Y} area(y_i ∪ y),   (15)

and the upper bound of the surrogate loss with a fixed ∆* is given by

[∆ − ∆*]_+ ≤ [1 − min_{y∈Y} area(y_i ∩ y) / max_{y∈Y} area(y_i ∪ y) − ∆*]_+.   (16)

When the original space is fixed, the problem is not straightforward, since the surrogate loss becomes a V-shaped function of ∆* when ρ > 1. In this case, we need to check the values of the function at both the upper and lower bounds of ∆*. The upper bound of ∆* is derived identically to that of ∆, and the lower bound of ∆* is given by

∆* = 1 − area(y_i ∩ y*)/area(y_i ∪ y*) ≥ 1 − max_{y*∈Y} area(y_i ∩ y*) / min_{y*∈Y} area(y_i ∪ y*).   (17)

Let ∆*_u and ∆*_l be the upper and lower bounds of ∆*, respectively. Then the upper bound of the surrogate loss with a fixed ∆ is given by

(1/ρ)∆* + [∆ − ∆*]_+ ≤ max{ (1/ρ)∆*_u + [∆ − ∆*_u]_+ , (1/ρ)∆*_l + [∆ − ∆*_l]_+ }.   (18)

By identifying the bounds of the surrogate loss as in Eq. (17) and (18), we can optimize the objective function in Eq. (12) through the alternating procedure based on the standard ESS algorithm.

4 Experiments

4.1 Dataset

Empirical evaluation of our method is performed on the Caltech-UCSD Birds 2011 (CUB-2011) [5] fine-grained categorization dataset. It contains 200 categories of different species of birds. The location of each bird is specified by a bounding box. In addition, a large collection of privileged information is provided in the form of 15 different part annotations, 312 attributes, and segmentation masks, manually labeled in each image by human annotators. Each category contains 30 training images and around 30 testing images.

4.2 Visual and Privileged Feature Extraction

Our feature descriptor in the visual space adopts the bag-of-visual-words model based on Speeded Up Robust Features (SURF) [26], which is almost identical to [2]. The dimensionality of the visual feature descriptors is 3,000. We additionally employ attributes and segmentation masks as privileged information.
The attribute information is described by a 312-dimensional vector, each element of which is a binary value indicating the visibility and relevance of the corresponding attribute. We use the segmentation information to inpaint the segmentation mask into each image, which results in an image containing the original background pixels with uniform foreground pixels. Subsequently, we extract the 3,000-dimensional feature descriptor based on the same bag-of-visual-words model as in the visual space. The intuition behind this approach is to generate a set of features that provide a guaranteed strong response in the foreground region. This response should be stronger than in the original space, hence allowing easier localization in the privileged space. For each sub-window, we create a histogram based on the presence of attributes and the frequency of the privileged codewords corresponding to the augmented visual space.

4.3 Evaluation

To evaluate our SSVM+ algorithm, we compare it against the original SSVM localization method by Blaschko and Lampert [1] in several training scenarios. In all experiments we tune the hyperparameters C, λ and ρ over a 4×4×4 grid spanning the values [2^−8, ..., 2^5]. For SSVM, only the one dimension of the search space corresponding to the parameter C is searched. We first investigate the influence of small training sample sizes on localization performance. For this setting, we loosely adopt the experimental setup of [27]. For training, we focus on 14 bird categories corresponding to 2 major bird groups. We train four different models, each trained on a distinct number of training images, namely nc = {1, 5, 10, 20} images per class, resulting in n = {14, 70, 140, 280} training images, respectively. Additionally, we train a model on n = 1000 images, corresponding to 100 bird classes, each with 10 training images. As a validation set, we use 500 training images chosen at random from categories other than the ones used for training.
For testing, we use all testing images of the entire CUB-2011 dataset. Table 1 presents the results of this experiment. In all cases, our method outperforms the SSVM method in both average overlap and average detection (PASCAL VOC overlap ratio > 50%).

Table 1: Comparison between our SSVM+ and the standard SSVM [1] by varying the number of classes and training images.

                   (A) Overlap                      (B) Detection
# training images  14    70    140   280   1000     14    70    140   280   1000
SSVM [1]           38.2  43.8  42.3  44.9  48.1     25.9  37.3  34.3  39.8  46.2
SSVM+              41.3  45.7  45.8  46.9  49.0     32.6  42.4  41.5  43.3  48.1
Diff.              +3.1  +1.9  +3.5  +2.0  +0.9     +6.7  +5.1  +7.2  +3.5  +1.9

Figure 2: Comparison results of average overlap (A) and detection (B) between our structured learning with privileged information (SSVM+) and the standard structured learning (SSVM) on 100 classes of the CUB-2011 dataset. The bird classes along the x-axis are sorted by the differences between the two methods, shown as the black area, in non-increasing order.

This implies that for the same number of training examples, our method consistently converges to a model with better generalization performance than SSVM. A previously observed trend [4, 23] of decreasing benefit of privileged information with increasing training set size is also apparent here. To evaluate the benefit of SSVM+ in more depth, we illustrate the average overlap and detection performance on all 100 classes in Figure 2, where 10 images per class are used for training with 14 classes (n = 140). In most bird classes, SSVM+ shows relatively better performance in both overlap ratio and detection rate. Note that each class typically has 30 testing images, but some classes have as few as 18 images. The average overlap ratio is 45.8% and the average detection is 12.1 (41.5%).

5 Discussion

We presented a structured prediction algorithm for object localization based on SSVM with privileged information.
Our algorithm is the first method for incorporating privileged information within a structured prediction framework. Our method allows the use of various types of additional information during training to improve generalization performance at testing time. We applied our proposed method to an object localization problem, which is solved by a novel structural SVM formulation using privileged information. We employed an alternating loss-augmented inference procedure to handle the term in the objective function corresponding to the privileged information. We applied the proposed algorithm to the Caltech-UCSD Birds 200-2011 dataset and obtained encouraging results, suggesting the potential benefit of exploiting additional information that is available during training only. Unfortunately, the benefit of privileged information tends to diminish as the number of training examples increases; our SSVM+ framework would therefore be particularly useful when only a few training examples are available or annotation cost is very high.

Acknowledgement

This work was supported partly by the ICT R&D program of MSIP/IITP [14-824-09-006; 14-824-09-014] and the IT R&D Program of MKE/KEIT (10040246).

References

[1] Matthew B. Blaschko and Christoph H. Lampert. Learning to localize objects with structured output regression. In ECCV, pages 2–15, 2008.
[2] Christoph H. Lampert, Matthew B. Blaschko, and Thomas Hofmann. Efficient subwindow search: A branch and bound framework for object localization. TPAMI, 31(12):2129–2142, 2009.
[3] Vladimir Vapnik, Akshay Vashist, and Natalya Pavlovitch. Learning using hidden information: Masterclass learning. In NATO Workshop on Mining Massive Data Sets for Security, pages 3–14, 2008.
[4] Vladimir Vapnik and Akshay Vashist. A new learning paradigm: Learning using privileged information. Neural Networks, 22(5-6):544–557, 2009.
[5] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 Dataset.
Technical report, California Institute of Technology, 2011.
[6] Lixin Duan, Dong Xu, Ivor W. Tsang, and Jiebo Luo. Visual event recognition in videos by learning from web data. TPAMI, 34(9):1667–1680, 2012.
[7] Lixin Duan, Ivor W. Tsang, and Dong Xu. Domain transfer multiple kernel learning. TPAMI, 34(3):465–479, 2012.
[8] Qiang Chen, Zheng Song, Yang Hua, Zhongyang Huang, and Shuicheng Yan. Hierarchical matching with side information for image classification. In CVPR, pages 3426–3433, 2012.
[9] Hao Xia, Steven C.H. Hoi, Rong Jin, and Peilin Zhao. Online multiple kernel similarity learning for visual search. TPAMI, 36(3):536–549, 2013.
[10] Gang Wang, David Forsyth, and Derek Hoiem. Improved object categorization and detection using comparative object similarity. TPAMI, 35(10):2442–2453, 2013.
[11] Wen Li, Lixin Duan, Dong Xu, and Ivor W. Tsang. Learning with augmented features for supervised and semi-supervised heterogeneous domain adaptation. TPAMI, 36(6):1134–1148, 2013.
[12] Christoph H. Lampert, Hannes Nickisch, and Stefan Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009.
[13] Ali Farhadi, Ian Endres, Derek Hoiem, and David Forsyth. Describing objects by their attributes. In CVPR, 2009.
[14] Mohamed Elhoseiny, Babak Saleh, and Ahmed Elgammal. Write a classifier: Zero-shot learning using purely textual descriptions. In ICCV, 2013.
[15] Richard Socher, Milind Ganjoo, Christopher D. Manning, and Andrew Y. Ng. Zero-shot learning through cross-modal transfer. In NIPS, pages 935–943, 2013.
[16] Lior Wolf and Noga Levy. The SVM-minus similarity score for video face recognition. In CVPR, 2013.
[17] Heng Yang and Ioannis Patras. Privileged information-based conditional regression forest for facial feature detection. In IEEE FG, pages 1–6, 2013.
[18] Xiaoyang Wang and Qiang Ji. A novel probabilistic approach utilizing clip attribute as hidden knowledge for event recognition. In ICPR, pages 3382–3385, 2012.
[19] Cezar Ionescu, Liefeng Bo, and Cristian Sminchisescu. Structural SVM for visual localization and continuous state estimation. In ICCV, pages 1157–1164, 2009.
[20] Qieyun Dai and Derek Hoiem. Learning to localize detected objects. In CVPR, pages 3322–3329, 2012.
[21] Viktoriia Sharmanska, Novi Quadrianto, and Christoph H. Lampert. Learning to rank using privileged information. In ICCV, pages 825–832, 2013.
[22] Vladimir Vapnik. Estimation of Dependences Based on Empirical Data. Springer, 2006.
[23] Dmitry Pechyony and Vladimir Vapnik. On the theory of learning with privileged information. In NIPS, pages 1894–1902, 2010.
[24] Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. Large margin methods for structured and interdependent output variables. JMLR, 6:1453–1484, 2005.
[25] Simon Lacoste-Julien, Martin Jaggi, Mark Schmidt, and Patrick Pletscher. Block-coordinate Frank-Wolfe optimization for structural SVMs. In ICML, 2013.
[26] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. Speeded-up robust features (SURF). CVIU, 110(3):346–359, 2008.
[27] Ryan Farrell, Om Oza, Ning Zhang, Vlad I. Morariu, Trevor Darrell, and Larry S. Davis. Birdlets: Subordinate categorization using volumetric primitives and pose-normalized appearance. In ICCV, pages 161–168, 2011.
Robust Logistic Regression and Classification

Jiashi Feng, EECS Department & ICSI, UC Berkeley, jshfeng@berkeley.edu
Huan Xu, ME Department, National University of Singapore, mpexuh@nus.edu.sg
Shie Mannor, EE Department, Technion, shie@ee.technion.ac.il
Shuicheng Yan, ECE Department, National University of Singapore, eleyans@nus.edu.sg

Abstract

We consider logistic regression with arbitrary outliers in the covariate matrix. We propose a new robust logistic regression algorithm, called RoLR, that estimates the parameter through a simple linear programming procedure. We prove that RoLR is robust to a constant fraction of adversarial outliers. To the best of our knowledge, this is the first result with performance guarantees on estimating a logistic regression model when the covariate matrix is corrupted. Besides regression, we apply RoLR to solving binary classification problems where a fraction of the training samples are corrupted.

1 Introduction

Logistic regression (LR) is a standard probabilistic statistical classification model that has been extensively used across disciplines such as computer vision, marketing, and the social sciences, to name a few. Different from linear regression, the outcome of LR on one sample is the probability that it is positive or negative, where the probability depends on a linear measure of the sample. Therefore, LR is widely used for classification. More formally, for a sample x_i ∈ R^p whose label is denoted y_i, the probability of y_i being positive is predicted to be P{y_i = +1} = 1/(1 + e^{−β⊤x_i}), given the LR model parameter β. In order to obtain a parameter that performs well, a set of labeled samples {(x_1, y_1), . . . , (x_n, y_n)} is usually collected to learn the LR parameter β that maximizes the induced likelihood function over the training samples. However, in practice, the training samples x_1, . . . , x_n are usually noisy, and some of them may even contain adversarial corruptions.
Here, by "adversarial" we mean that the corruptions can be arbitrary and unbounded, and need not come from any specific distribution. For example, in image/video classification tasks, some images or videos may be corrupted unexpectedly due to sensor errors or severe occlusions of the contained objects. Such corrupted samples, called outliers, can skew the parameter estimation severely and hence destroy the performance of LR. To see the sensitivity of LR to outliers more intuitively, consider a simple example where all the samples x_i are from the one-dimensional space R, as shown in Figure 1. Using only the inlier samples yields a correct LR parameter (we show the induced function curve) which explains the inliers well. However, when only one sample is corrupted (originally negative but now closer to the positive samples), the resulting regression curve is pulled far away from the ground-truth one, and the label predictions on the inliers are completely wrong. This demonstrates that LR is indeed fragile to sample corruptions. More rigorously, the non-robustness of LR can be shown by calculating its influence function [7] (detailed in the supplementary material).

Figure 1: The estimated logistic regression curve (red solid) is far away from the correct one (blue dashed) due to the existence of just one outlier (red circle).

As Figure 1 demonstrates, the maximum-likelihood estimate of LR is extremely sensitive to the presence of anomalous data in the sample. Pregibon also observed this non-robustness of LR in [14]. To address this issue, Pregibon [14], Cook and Weisberg [4], and Johnson [9] proposed procedures to identify observations that are influential for estimating β based on certain outlyingness measures. Stefanski et al. [16, 10] and Bianco et al.
[2] also proposed robust estimators, which, however, require robust estimation of the covariate matrix or boundedness of the outliers. Moreover, the breakdown point¹ of those methods is generally inversely proportional to the sample dimensionality and diminishes rapidly for high-dimensional samples. We propose a new robust logistic regression algorithm, called RoLR, which optimizes a robustified linear correlation between the response y and the linear measure ⟨β, x⟩ via an efficient linear-programming-based procedure. We demonstrate that the proposed RoLR achieves robustness to arbitrary covariate corruptions. Even when a constant fraction of the training samples are corrupted, RoLR is still able to learn the LR parameter with a non-trivial upper bound on the error. Besides this theoretical guarantee of RoLR on parameter estimation, we also provide empirical and population risk bounds for RoLR. Moreover, RoLR only needs to solve a linear programming problem and is thus scalable to large-scale data sets, in sharp contrast to previous LR optimization algorithms, which typically resort to (computationally expensive) iteratively reweighted methods [11]. The proposed RoLR can be easily adapted to solving binary classification problems where corrupted training samples are present. We also provide a theoretical classification performance guarantee for RoLR. Due to the space limitation, we defer all the proofs to the supplementary material.

2 Related Works

Several previous works have investigated multiple approaches to robustify logistic regression (LR) [15, 13, 17, 16, 10]. The majority of them are M-estimator based: minimizing a more complicated and more robust loss function than the standard loss function (negative log-likelihood) of LR.
For example, Pregibon [15] proposed the following M-estimator:

β̂ = argmin_β Σ_{i=1}^n ρ(ℓ_i(β)),

where ℓ_i(·) is the negative log-likelihood of the i-th sample x_i and ρ(·) is a Huber-type function [8] such as

ρ(t) = t,           if t ≤ c,
ρ(t) = 2√(tc) − c,  if t > c,

with c a positive parameter. However, the result from such an estimator is not robust to outliers with high-leverage covariates, as shown in [5].

¹The breakdown point is defined as the percentage of corrupted points that can make the output of an algorithm arbitrarily bad.

Recently, Ding et al. [6] introduced T-logistic regression as a robust alternative to standard LR, which replaces the exponential distribution in LR by the t-exponential distribution family. However, T-logistic regression only guarantees that the output parameter converges to a local optimum of the loss function instead of converging to the ground-truth parameter. Our work is largely inspired by the following two recent works [3, 13] on robust sparse regression. In [3], Chen et al. proposed to replace the standard vector inner product by a trimmed one, and obtained a novel linear regression algorithm that is robust to unbounded covariate corruptions. In this work, we also utilize this simple yet powerful operation to achieve robustness. In [13], a convex programming method for estimating the sparse parameters of the logistic regression model is proposed:

max_β Σ_{i=1}^m y_i⟨x_i, β⟩, s.t. ∥β∥_1 ≤ √s, ∥β∥ ≤ 1,

where s is the sparsity prior parameter on β. However, this method is not robust to a corrupted covariate matrix. A few or even one corrupted sample may dominate the correlation in the objective function and yield arbitrarily bad estimates. In this work, we propose a robust algorithm to remedy this issue.

3 Robust Logistic Regression

3.1 Problem Setup

We consider the problem of logistic regression (LR). Let S^{p−1} denote the unit sphere and B^p_2 denote the Euclidean unit ball in R^p. Let β* be the ground-truth parameter of the LR model.
We assume the training samples are covariate-response pairs {(x_i, y_i)}_{i=1}^{n+n_1} ⊂ R^p × {−1, +1}, which, if not corrupted, obey the following LR model:

P{y_i = +1} = τ(⟨β*, x_i⟩ + v_i),   (1)

where the function τ(·) is defined as τ(z) = 1/(1 + e^{−z}). The additive noise v_i ~ N(0, σ_e²) is an i.i.d. Gaussian random variable with zero mean and variance σ_e². In particular, in the noiseless case we assume σ_e² = 0. Since LR only depends on ⟨β*, x_i⟩, we can always scale the samples x_i to make the magnitude of β* less than 1. Thus, without loss of generality, we assume that β* ∈ S^{p−1}. Out of the n + n_1 samples, a constant number (n_1) of the samples may be adversarially corrupted, and we make no assumptions on these outliers. Throughout the paper, we use λ ≜ n_1/n to denote the outlier fraction. We call the remaining n non-corrupted samples "authentic" samples, which obey the following standard sub-Gaussian design [12, 3].

Definition 1 (Sub-Gaussian design). We say that a random matrix X = [x_1, . . . , x_n] ∈ R^{p×n} is sub-Gaussian with parameter ((1/n)Σ_x, (1/n)σ_x²) if: (1) each column x_i ∈ R^p is sampled independently from a zero-mean distribution with covariance (1/n)Σ_x, and (2) for any unit vector u ∈ R^p, the random variable u⊤x_i is sub-Gaussian with parameter² (1/√n)σ_x.

The above sub-Gaussian random variables have several nice concentration properties, one of which is stated in the following lemma [12].

Lemma 1 (Sub-Gaussian concentration [12]). Let X_1, . . . , X_n be n i.i.d. zero-mean sub-Gaussian random variables with parameter σ_x/√n and variance at most σ_x²/n. Then we have |Σ_{i=1}^n X_i² − σ_x²| ≤ c_1 σ_x² √(log p / n) with probability at least 1 − p^{−2} for some absolute constant c_1.

Based on the above concentration property, we can obtain the following bound on the magnitude of a collection of sub-Gaussian random variables [3].

Lemma 2. Suppose X_1, . . . , X_n are n independent sub-Gaussian random variables with parameter σ_x/√n.
Then we have max_{i=1,...,n} |X_i| ≤ 4σ_x √((log n + log p)/n) with probability at least 1 − p^{−2}.

²Here, the parameter means the sub-Gaussian norm of the random variable Y, ∥Y∥_{ψ_2} = sup_{q≥1} q^{−1/2}(E|Y|^q)^{1/q}.

Also, this lemma provides a rough bound on the magnitude of the inlier samples, and this bound serves as a threshold for pre-processing the samples in the following RoLR algorithm.

3.2 RoLR Algorithm

We now proceed to introduce the details of the proposed Robust Logistic Regression (RoLR) algorithm. Basically, RoLR first removes the samples with overly large magnitude and then maximizes a trimmed correlation of the remaining samples with the estimated LR model. The intuition behind RoLR maximizing the trimmed correlation is: if the outliers have too large a magnitude, they will not contribute to the correlation and thus will not affect the LR parameter learning. Otherwise, they have a bounded effect on the LR learning (which can in fact be bounded by the inlier samples, due to our adopting the trimmed statistic). Algorithm 1 gives the implementation details of RoLR.

Algorithm 1 RoLR
Input: Contaminated training samples {(x_1, y_1), . . . , (x_{n+n_1}, y_{n+n_1})}, an upper bound n_1 on the number of outliers, the number of inliers n, and the sample dimension p.
Initialization: Set T = 4√(log p/n + log n/n).
Preprocessing: Remove the samples (x_i, y_i) whose magnitude satisfies ∥x_i∥ ≥ T.
Solve the following linear programming problem (see Eqn. (3)):
  β̂ = argmax_{β ∈ B^p_2} Σ_{i=1}^n [y⟨β, x⟩]_{(i)}.
Output: β̂.

Note that, within the RoLR algorithm, we need to optimize the following sorted statistic:

max_{β ∈ B^p_2} Σ_{i=1}^n [y⟨β, x⟩]_{(i)},   (2)

where [·]_{(i)} denotes the sorted statistic such that [z]_{(1)} ≤ [z]_{(2)} ≤ . . . ≤ [z]_{(n)}, and z denotes the involved variable. The problem in Eqn. (2) is equivalent to minimizing the summation of the top n variables, which is a convex problem and can be solved by an off-the-shelf solver (such as CVX).
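A minimal numerical sketch of the RoLR pipeline (thresholding followed by maximizing the trimmed correlation) is given below. Instead of the linear program of Eqn. (3), it uses projected subgradient ascent on the trimmed objective over the unit ball, which is our own simplification; the function name, step size, and iteration count are likewise our assumptions, not the paper's.

```python
import numpy as np

def rolr_sketch(X, y, n, T, iters=100, step=0.1):
    """X: (m, p) covariates, y: (m,) labels in {-1, +1}; n: number of inliers.
    1) Drop samples with ||x_i|| >= T (the preprocessing step).
    2) Maximize the sum of the n smallest terms y_i * <beta, x_i> over the
       unit ball by projected subgradient ascent (a stand-in for the LP)."""
    keep = np.linalg.norm(X, axis=1) < T
    X, y = X[keep], y[keep]
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        corr = y * (X @ beta)                        # per-sample correlations
        idx = np.argsort(corr)[:n]                   # indices of the n smallest terms
        grad = (y[idx, None] * X[idx]).sum(axis=0)   # subgradient of the trimmed sum
        beta = beta + step * grad
        norm = np.linalg.norm(beta)
        if norm > 1.0:                               # project back onto the unit ball
            beta = beta / norm
    return beta
```

On a toy data set whose inliers all point along the first axis, the sketch recovers a unit vector along that axis after the overly large sample is removed by the threshold.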
Here, we note that it can also be converted to the following linear programming problem (with a quadratic constraint), which enjoys higher computational efficiency. To see this, we first introduce auxiliary variables t_i ∈ {0, 1} as indicators of whether the corresponding terms y_i⟨β, x_i⟩ fall among the smallest n ones. Then, we write the problem in Eqn. (2) as

max_{β ∈ B^p_2} min_{t_i} Σ_{i=1}^{n+n_1} t_i · y_i⟨β, x_i⟩, s.t. Σ_{i=1}^{n+n_1} t_i ≤ n, 0 ≤ t_i ≤ 1.

Here the constraints Σ_{i=1}^{n+n_1} t_i ≤ n, 0 ≤ t_i ≤ 1 come from the standard relaxation of Σ_{i=1}^{n+n_1} t_i = n, t_i ∈ {0, 1}. The above problem is now a max-min linear program. To decouple the variables β and t_i, we turn to the dual form of the inner minimization problem. Let ν and ξ_i be the Lagrange multipliers for the constraints Σ_{i=1}^{n+n_1} t_i ≤ n and t_i ≤ 1, respectively. Then the dual form w.r.t. t_i of the above problem is:

max_{β, ν, ξ_i} −ν·n − Σ_{i=1}^{n+n_1} ξ_i, s.t. y_i⟨β, x_i⟩ + ν + ξ_i ≥ 0, β ∈ B^p_2, ν ≥ 0, ξ_i ≥ 0.   (3)

Reformulating logistic regression as a linear programming problem in this way significantly enhances the scalability of LR to large-scale datasets, a property very appealing in practice, since linear programming is computationally efficient and can handle up to 1 × 10^6 variables on a standard PC.

3.3 Performance Guarantee for RoLR

In contrast to traditional LR algorithms, RoLR does not perform maximum-likelihood estimation. Instead, RoLR maximizes the correlation y_i⟨β, x_i⟩. This strategy reduces the computational complexity of LR and, more importantly, enhances the robustness of the parameter estimation, using the fact that the authentic samples usually have positive correlation between y_i and ⟨β, x_i⟩, as described in the following lemma.
The expectation of the product y⟨β, x⟩ is

E y⟨β, x⟩ = E sech²(g/2),

where g ~ N(0, σ_x² + σ_e²) is a Gaussian random variable and σ_e² is the noise level in (1). Furthermore, the above expectation can be bounded as

φ_+(σ_e², σ_x²) ≤ E y⟨β, x⟩ ≤ φ_−(σ_e², σ_x²),

where φ_+(σ_e², σ_x²) and φ_−(σ_e², σ_x²) are positive. In particular, they can take the forms

φ_+(σ_e², σ_x²) = (σ_x²/3) sech²((1 + σ_e²)/2)  and  φ_−(σ_e², σ_x²) = σ_x²/3 + (σ_x²/6) sech²((1 + σ_e²)/2).

The following lemma shows that the difference of correlations is an effective surrogate for the difference of the LR parameters. Thus we can always minimize ∥β̂ − β*∥ by maximizing Σ_i y_i⟨β̂, x_i⟩.

Lemma 4. Fix β ∈ S^{p−1} as the ground-truth parameter in (1) and β′ ∈ B^p_2. Denote η = E y⟨β, x⟩. Then E y⟨β′, x⟩ = η⟨β, β′⟩, and thus

E[y⟨β, x⟩ − y⟨β′, x⟩] = η(1 − ⟨β, β′⟩) ≥ (η/2)∥β − β′∥²_2.

Based on these two lemmas, along with some concentration properties of the inlier samples (shown in the supplementary material), we have the following performance guarantee of RoLR on LR model parameter recovery.

Theorem 1 (RoLR for recovering the LR parameter). Let λ ≜ n_1/n be the outlier fraction, β̂ be the output of Algorithm 1, and β* be the ground-truth parameter. Suppose that there are n authentic samples generated by the model described in (1). Then, with probability larger than 1 − 4 exp(−c_2 n/8),

∥β̂ − β*∥ ≤ 2λ φ_−(σ_e², σ_x²)/φ_+(σ_e², σ_x²) + [2(λ + 4 + 5√λ)/φ_+(σ_e², σ_x²)] √(p/n) + [8λ σ_x²/φ_+(σ_e², σ_x²)] √(log p/n + log n/n).

Here c_2 is an absolute constant.

Remark 1. To make the above result more explicit, consider the asymptotic case where p/n → 0. The bound then becomes

∥β̂ − β*∥ ≤ 2λ φ_−(σ_e², σ_x²)/φ_+(σ_e², σ_x²),

which holds with probability larger than 1 − 4 exp(−c_2 n/8). In the noiseless case, i.e., σ_e = 0, and assuming σ_x² = 1, we have φ_+ = (1/3) sech²(1/2) ≈ 0.2622 and φ_− = 1/3 + (1/6) sech²(1/2) ≈ 0.4644. The ratio is φ_−/φ_+ ≈ 1.7715. Thus the bound simplifies to ∥β̂ − β*∥ ≲ 3.54λ.
Recall that β̂, β* ∈ S^{p−1}, so the maximal value of ∥β̂ − β*∥ is 2. Thus, for the above result to be non-trivial, we need 3.54λ ≤ 2, namely λ ≤ 0.56. In other words, in the noiseless case, RoLR is able to estimate the LR parameter with a non-trivial error bound (also known as a "breakdown point") with up to 0.56/1.56 × 100% ≈ 36% of the samples being outliers.

4 Empirical and Population Risk Bounds of RoLR

Besides parameter recovery, we are also concerned with the prediction performance of the estimated LR model in practice. The standard prediction loss function ℓ(·, ·) of LR is non-negative and bounded, and is defined as:

ℓ((x_i, y_i), β) = 1/(1 + exp{−y_i β⊤x_i}).   (4)

The goodness of an LR predictor β is measured by its population risk:

R(β) = E_{P(X,Y)} ℓ((x, y), β),

where P(X, Y) describes the joint distribution of the covariate X and response Y. However, the population risk can rarely be calculated directly, as the distribution P(X, Y) is usually unknown. In practice, we therefore consider the empirical risk, calculated over the provided training samples:

R_emp(β) = (1/n) Σ_{i=1}^n ℓ((x_i, y_i), β).

Note that the empirical risk is computed only over the authentic samples, and hence cannot be optimized directly when outliers exist. Based on the bound on ∥β̂ − β*∥ provided in Theorem 1, we can easily obtain the following empirical risk bound for RoLR, as the LR loss function in Eqn. (4) is Lipschitz continuous.

Corollary 1 (Bound on the empirical risk). Let β̂ be the output of Algorithm 1, and β* be the optimal parameter minimizing the empirical risk. Suppose that there are n authentic samples generated by the model described in (1). Define X ≜ 4σ_x √((log n + log p)/n). Then, with probability larger than 1 − 4 exp(−c_2 n/8), the empirical risk of β̂ is bounded by

R_emp(β̂) − R_emp(β*) ≤ X { 2λ φ_−(σ_e², σ_x²)/φ_+(σ_e², σ_x²) + [2(λ + 4 + 5√λ)/φ_+(σ_e², σ_x²)] √(p/n) + [8λ σ_x²/φ_+(σ_e², σ_x²)] √(log p/n + log n/n) }.
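The numerical constants quoted in Remark 1 can be reproduced directly from the closed forms of φ_+ and φ_− in Lemma 3 (a quick sanity check; the helper name is ours):

```python
import math

def sech(z):
    """Hyperbolic secant, 1 / cosh(z)."""
    return 1.0 / math.cosh(z)

# Noiseless case of Remark 1: sigma_e = 0, sigma_x^2 = 1.
phi_plus = (1.0 / 3.0) * sech(0.5) ** 2                # ≈ 0.2622
phi_minus = 1.0 / 3.0 + (1.0 / 6.0) * sech(0.5) ** 2   # ≈ 0.4644
ratio = phi_minus / phi_plus                           # ≈ 1.7715
# The bound constant of Remark 1 is then 2 * ratio ≈ 3.54.
```

These match the values 0.2622, 0.4644, and 1.7715 stated in Remark 1.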
Given the empirical risk bound, we can readily obtain a bound on the population risk by referring to standard generalization results in terms of various function class complexities. Widely used complexity measures include the VC-dimension [18] and the Rademacher and Gaussian complexities [1]. Compared with the Rademacher complexity, which is data dependent, the VC-dimension is more universal, although the resulting generalization bound can be slightly loose. Here, we adopt the VC-dimension to measure the function complexity and obtain the following population risk bound.

Corollary 2 (Bound on the population risk). Let β̂ be the output of Algorithm 1, and β* be the optimal parameter. Suppose the parameter space S^{p−1} ∋ β has finite VC-dimension d, and that n authentic samples are generated by the model described in (1). Define X ≜ 4σ_x √((log n + log p)/n). Then, with probability larger than 1 − 4 exp(−c_2 n/8) − δ, the population risk of β̂ is bounded by

R(β̂) − R(β*) ≤ X { 2λ φ_−(σ_e², σ_x²)/φ_+(σ_e², σ_x²) + [2(λ + 4 + 5√λ)/φ_+(σ_e², σ_x²)] √(p/n) + [8λ σ_x²/φ_+(σ_e², σ_x²)] √(log p/n + log n/n) + 2c_3 √((d + ln(1/δ))/n) }.

Here both c_2 and c_3 are absolute constants.

5 Robust Binary Classification

5.1 Problem Setup

Different from the sample generation model for LR, in the standard binary classification setting the label y_i of a sample x_i is deterministically determined by the sign of the linear measure ⟨β*, x_i⟩. Namely, the samples are generated by the following model:

y_i = sign(⟨β*, x_i⟩ + v_i).   (5)

Here v_i is a Gaussian noise term as in Eqn. (1). Since y_i is deterministically related to ⟨β*, x_i⟩, the expected correlation E y⟨β, x⟩ achieves its maximal value in this setup (cf. Lemma 5), which ensures that RoLR also performs well for classification. We again assume that the training samples contain n authentic samples and at most n_1 outliers.

5.2 Performance Guarantee for Robust Classification

Lemma 5. Fix β ∈ S^{p−1}.
Suppose the sample $(x, y)$ is generated by the model described in (5). The expectation of the product $y\langle \beta, x \rangle$ is
$$\mathbb{E}\, y\langle \beta, x \rangle = \sqrt{\frac{2\sigma_x^4}{\pi(\sigma_x^2 + \sigma_v^2)}}.$$
Comparing the above result with the one in Lemma 3: here, for binary classification, we can calculate the expectation of the correlation exactly, and this expectation is always larger than that of the LR setting. The correlation depends on the signal-to-noise ratio $\sigma_x/\sigma_e$. In the noiseless case, $\sigma_e = 0$ and the expected correlation is $\sigma_x\sqrt{2/\pi}$, the well-known mean of the half-normal distribution. Similar to the analysis of RoLR for LR, based on Lemma 5 we can obtain the following performance guarantee for RoLR in solving classification problems.

Theorem 2. Let $\hat\beta$ be the output of Algorithm 1, and $\beta^*$ be the optimal parameter minimizing the empirical risk. Suppose there are $n$ authentic samples generated by the model described by (5). Then, with probability larger than $1 - 4\exp(-c_2 n/8)$,
$$\|\hat\beta - \beta^*\|_2 \le 2\lambda + 2(\lambda + 4 + 5\sqrt{\lambda})\sqrt{\frac{(\sigma_e^2 + \sigma_x^2)\pi p}{2\sigma_x^4 n}} + 8\lambda\sqrt{\frac{(\sigma_e^2 + \sigma_x^2)\pi}{2}}\left(\sqrt{\frac{\log p}{n}} + \frac{\log n}{n}\right).$$
The proof of Theorem 2 is similar to that of Theorem 1. Also, as in the LR case, based on the above parameter error bound it is straightforward to obtain the empirical and population risk bounds of RoLR for classification. Due to space limitations, we only sketch how to obtain the risk bounds. For the classification problem, the most natural loss function is the 0–1 loss. However, the 0–1 loss is non-convex and non-smooth, and we cannot get a non-trivial bound on its value in terms of $\|\hat\beta - \beta^*\|$ as we did for the logistic loss. Fortunately, several convex surrogates of the 0–1 loss have been proposed and achieve good classification performance, including the hinge loss, exponential loss, and logistic loss. These loss functions are all Lipschitz continuous, and thus we can bound their empirical and population risks as for logistic regression.
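The expectation in Lemma 5 can be sanity-checked by Monte Carlo simulation; a sketch (the parameter values and seed are our own, for illustration) is:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200_000, 5
sigma_x, sigma_v = 1.0, 0.5

beta = rng.standard_normal(p)
beta /= np.linalg.norm(beta)                 # beta on the unit sphere S^{p-1}

X = sigma_x * rng.standard_normal((n, p))    # covariates x ~ N(0, sigma_x^2 I)
v = sigma_v * rng.standard_normal(n)         # noise v ~ N(0, sigma_v^2)
y = np.sign(X @ beta + v)                    # labels from the model in (5)

empirical = np.mean(y * (X @ beta))
predicted = np.sqrt(2 * sigma_x**4 / (np.pi * (sigma_x**2 + sigma_v**2)))
print(empirical, predicted)                  # the two values agree closely
```

With $\sigma_x = 1$ and $\sigma_v = 0.5$, the predicted correlation is $\sqrt{2/(1.25\pi)} \approx 0.714$, and the empirical average over $2 \times 10^5$ samples matches it to within Monte Carlo error.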
6 Simulations

In this section, we conduct simulations to verify the robustness of RoLR along with its applicability to robust binary classification. We compare RoLR with standard logistic regression, which estimates the model parameter by maximizing the log-likelihood function. We randomly generate the samples according to the model in Eqn. (1) for the logistic regression problem. In particular, we first sample the model parameter $\beta \sim N(0, I_p)$ and normalize it as $\beta := \beta/\|\beta\|_2$. Here $p$ is the dimension of the parameter, which is also the dimension of the samples. The samples are drawn i.i.d. from $x_i \sim N(0, \Sigma_x)$ with $\Sigma_x = I_p$, and the Gaussian noise is sampled as $v_i \sim N(0, \sigma_e)$. Then, the sample label $y_i$ is generated according to $P\{y_i = +1\} = \tau(\langle \beta, x_i \rangle + v_i)$ for the LR case. For the classification case, the sample labels are generated by $y_i = \mathrm{sign}(\langle \beta, x_i \rangle + v_i)$, and an additional $n_t = 1{,}000$ authentic samples are generated for testing. The entries of the outliers $x_o$ are i.i.d. random variables from the uniform distribution on $[-\sigma_o, \sigma_o]$ with $\sigma_o = 10$. The labels of outliers are generated by $y_o = \mathrm{sign}(\langle -\beta, x_o \rangle)$. That is, outliers follow the model with the opposite sign to inliers, which, according to our experiments, is the most adversarial outlier model. The ratio of outliers to inliers is denoted $\lambda = n_1/n$, where $n_1$ is the number of outliers and $n$ is the number of inliers. We fix $n = 1{,}000$ and vary $\lambda$ from 0 to 1.2 in steps of 0.1. We repeat the simulations under each outlier-fraction setting 10 times and plot the performance (average and variance) of RoLR and ordinary LR versus the ratio of outliers to inliers in Figure 2. In particular, for the task of logistic regression, we measure performance by the parameter estimation error $\|\hat\beta - \beta^*\|$. For classification, we use the classification error rate on test samples, $\#(\hat y_i \ne y_i)/n_t$, as the performance measure.
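The sample generation described above can be sketched in Python as follows (the seed, the outlier ratio chosen here, and reading $\sigma_e$ as a standard deviation are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 1000, 20, 0.3              # inliers, dimension, outlier/inlier ratio (example value)
sigma_e, sigma_o = 0.5, 10.0
n1 = int(lam * n)                      # number of outliers

# Authentic samples: logistic model with parameter beta on the unit sphere
beta = rng.standard_normal(p)
beta /= np.linalg.norm(beta)
X = rng.standard_normal((n, p))                      # x_i ~ N(0, I_p)
v = sigma_e * rng.standard_normal(n)                 # Gaussian noise
prob = 1.0 / (1.0 + np.exp(-(X @ beta + v)))         # P{y_i = +1}
y = np.where(rng.random(n) < prob, 1.0, -1.0)

# Adversarial outliers: uniform entries, labels with the opposite sign w.r.t. beta
Xo = rng.uniform(-sigma_o, sigma_o, size=(n1, p))
yo = np.sign(Xo @ (-beta))

X_all, y_all = np.vstack([X, Xo]), np.concatenate([y, yo])
```

By construction, every outlier label is anti-correlated with $\langle \beta, x_o \rangle$, which is the adversarial model used throughout the simulations.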
Here $\hat y_i = \mathrm{sign}(\hat\beta^\top x_i)$ is the predicted label for sample $x_i$ and $y_i$ is the ground-truth label.

[Figure 2: Performance comparison between RoLR, ordinary LR, and LR with the thresholding preprocessing as in RoLR (LR+P), for (a) regression parameter estimation (error $\|\hat\beta - \beta^*\|$ vs. outlier-to-inlier ratio) and (b) classification (classification error vs. outlier-to-inlier ratio), under the setting $\sigma_e = 0.5$, $\sigma_o = 10$, $p = 20$, and $n = 1{,}000$. The simulation is repeated 10 times.]

The results, shown in Figure 2, clearly demonstrate that RoLR performs much better than standard LR on both tasks. Even when the outlier fraction is small ($\lambda = 0.1$), RoLR already outperforms LR by a large margin. From Figure 2(a), we observe that when $\lambda \ge 0.3$, the parameter estimation error of LR reaches around 1.3, which is quite unsatisfactory, since simply outputting the trivial solution $\hat\beta = 0$ has an error of 1 (recall $\|\beta^*\|_2 = 1$). In contrast, RoLR keeps the estimation error around 0.5 even when $\lambda = 0.8$, i.e., when around 45% of the samples are outliers. To see the role of preprocessing in RoLR, we also apply this preprocessing to LR and plot its performance as "LR+P" in the figure. The preprocessing step indeed helps remove certain outliers with large magnitudes. However, when the fraction of outliers increases to $\lambda = 0.5$, more outliers with magnitudes below the pre-defined threshold enter the retained samples and increase the error of "LR+P" above 1. This demonstrates that maximizing the correlation is more essential to the robustness of RoLR than the thresholding. From the classification results in Figure 2(b), we observe that again from $\lambda = 0.2$, LR starts to break down.
The classification error rate of LR reaches 0.8, which is even worse than random guessing. In contrast, RoLR still achieves satisfactory classification performance, with a classification error rate of around 0.4, even as $\lambda \to 1$. But when $\lambda > 1$, RoLR also breaks down, as outliers dominate the training samples. When there are no outliers, with the same inliers ($n = 1 \times 10^3$ and $p = 20$), the error of LR in logistic regression estimation is 0.06, while the error of RoLR is 0.13. This performance degradation of RoLR is due to the fact that RoLR maximizes the linear correlation statistic instead of the likelihood (as in LR) when inferring the regression parameter. This is the price RoLR pays for robustness. We provide further investigations, along with results on large real-world data, in the supplementary material.

7 Conclusions

We investigated the problem of logistic regression (LR) in a practical setting where the covariate matrix is adversarially corrupted. Standard LR methods were shown to fail in this case. We proposed a novel LR method, RoLR, to solve this issue. We theoretically and experimentally demonstrated that RoLR is robust to covariate corruptions. Moreover, we devised a linear programming algorithm to solve RoLR, which is computationally efficient and can scale to large problems. We further applied RoLR to successfully learn classifiers from corrupted training samples.

Acknowledgments

The work of H. Xu was partially supported by the Ministry of Education of Singapore through AcRF Tier Two grant R-265-000-443-112. The work of S. Mannor was partially funded by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) and by the Israel Science Foundation (ISF, under contract 920/12).

References

[1] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. The Journal of Machine Learning Research, 3:463–482, 2003.
[2] Ana M. Bianco and Víctor J. Yohai.
Robust estimation in the logistic regression model. Springer, 1996.
[3] Yudong Chen, Constantine Caramanis, and Shie Mannor. Robust sparse regression under adversarial corruption. In ICML, 2013.
[4] R. Dennis Cook and Sanford Weisberg. Residuals and Influence in Regression. 1982.
[5] J. B. Copas. Binary regression models for contaminated data. Journal of the Royal Statistical Society, Series B (Methodological), pages 225–265, 1988.
[6] Nan Ding, S. V. N. Vishwanathan, Manfred Warmuth, and Vasil S. Denchev. T-logistic regression for binary and multiclass classification. Journal of Machine Learning Research, 5:1–55, 2013.
[7] Frank R. Hampel. The influence curve and its role in robust estimation. Journal of the American Statistical Association, 69(346):383–393, 1974.
[8] Peter J. Huber. Robust Statistics. Springer, 2011.
[9] Wesley Johnson. Influence measures for logistic regression: Another point of view. Biometrika, 72(1):59–65, 1985.
[10] Hans R. Künsch, Leonard A. Stefanski, and Raymond J. Carroll. Conditionally unbiased bounded-influence estimation in general regression models, with applications to generalized linear models. Journal of the American Statistical Association, 84(406):460–466, 1989.
[11] Su-In Lee, Honglak Lee, Pieter Abbeel, and Andrew Y. Ng. Efficient L1 regularized logistic regression. In AAAI, 2006.
[12] Po-Ling Loh and Martin J. Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity. Annals of Statistics, 40(3):1637, 2012.
[13] Yaniv Plan and Roman Vershynin. Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach. IEEE Transactions on Information Theory, 59(1):482–494, 2013.
[14] Daryl Pregibon. Logistic regression diagnostics. The Annals of Statistics, pages 705–724, 1981.
[15] Daryl Pregibon. Resistant fits for some commonly used logistic models with medical applications. Biometrics, pages 485–498, 1982.
[16] Leonard A. Stefanski, Raymond J. Carroll, and David Ruppert.
Optimally bounded score functions for generalized linear models with applications to logistic regression. Biometrika, 73(2):413–424, 1986.
[17] Julie Tibshirani and Christopher D. Manning. Robust logistic regression using shift parameters. arXiv preprint arXiv:1305.4987, 2013.
[18] Vladimir N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability & Its Applications, 16(2):264–280, 1971.
Flexible Transfer Learning under Support and Model Shift

Xuezhi Wang, Computer Science Department, Carnegie Mellon University, xuezhiw@cs.cmu.edu
Jeff Schneider, Robotics Institute, Carnegie Mellon University, schneide@cs.cmu.edu

Abstract

Transfer learning algorithms are used when one has sufficient training data for one supervised learning task (the source/training domain) but only very limited training data for a second task (the target/test domain) that is similar but not identical to the first. Previous work on transfer learning has focused on relatively restricted settings, where specific parts of the model are considered to be carried over between tasks. Recent work on covariate shift focuses on matching the marginal distributions of the observations X across domains. Similarly, work on target/conditional shift focuses on matching the marginal distributions of the labels Y and adjusting the conditional distributions P(X|Y), such that P(X) can be matched across domains. However, covariate shift assumes that the support of the test P(X) is contained in the support of the training P(X), i.e., that the training set is richer than the test set; target/conditional shift makes a similar assumption for P(Y). Moreover, not much work on transfer learning has considered the case where a few labels in the test domain are available, and little has been done for the case where all marginal and conditional distributions are allowed to change while the changes remain smooth. In this paper, we consider a general case where both the support and the model change across domains. We transform both X and Y by a location-scale shift to achieve transfer between tasks. Since we allow more flexible transformations, the proposed method yields better results on both synthetic and real-world data.

1 Introduction

In a classical transfer learning setting, we have sufficient fully labeled data from the source domain (or training domain), where we fully observe the data points Xtr and all corresponding labels Y tr are known.
On the other hand, we are given data points, Xte, from the target domain (or test domain), but few or none of the corresponding labels, Y te, are given. The source and target domains are related but not identical; thus the joint distributions, P(Xtr, Y tr) and P(Xte, Y te), differ across the two domains. Without any transfer learning, a statistical model learned from the source domain does not directly apply to the target domain. The use of transfer learning algorithms minimizes or reduces the labeling work needed in the target domain. Such an algorithm learns and transfers a model based on the labeled data from the source domain and the data with few or no labels from the target domain, and should perform well on the unlabeled data in the target domain. Some real-world applications of transfer learning include adapting a classification model trained on some products to help learn classification models for other products [17], and learning a model on medical data for one disease and transferring it to another disease. The real-world application we consider is an autonomous agriculture application where we want to manage the growth of grapes in a vineyard [3]. Recently, robots have been developed to take images of the crop throughout the growing season. When the product is weighed at harvest at the end of each season, the yield for each vine will be known. The measured yield can be used to learn a model to predict yield from images. Farmers would like to know their yield early in the season so they can make better decisions on selling the produce or nurturing the growth. Acquiring training labels early in the season is very expensive because it requires a human to go out and manually estimate the yield. Ideally, we can apply a transfer-learning model which learns from previous years and/or from other grape varieties to minimize this manual yield estimation.
Furthermore, if we decide that some of the vines have to be assessed manually to learn the model shift, a simultaneously applied active learning algorithm will tell us which vines should be measured manually so that the labeling cost is minimized. Finally, there are two different objectives of interest. To better nurture the growth, we need an accurate estimate of the current yield of each vine. However, to make informed decisions about pre-selling an appropriate amount of the crop, only an estimate of the sum of the vine yields is needed. We call these problems active learning and active surveying, respectively, and they lead to different selection criteria. In this paper, we focus our attention on real-valued regression problems. We propose a transfer learning algorithm that allows both the support on X and Y and the model P(Y|X) to change across the source and target domains. We assume only that the change is smooth as a function of X. In this way, more flexible transformations are allowed than mean-centering and variance-scaling. Specifically, we build a Gaussian Process to model the prediction on the transformed X; the prediction is then matched with a few observed labels Y (also properly transformed) available in the target domain, such that both the transformation on X and the transformation on Y can be learned. The GP-based approach naturally lends itself to the active learning setting, where we can sequentially choose query points from the target dataset. Its final predictive covariance, which combines the uncertainty in the transfer function and the uncertainty in the target label prediction, can be plugged into various GP-based active query selection criteria. In this paper we consider (1) Active Learning, which reduces the total predictive covariance [18, 19]; and (2) Active Surveying [20, 21], which uses as its estimation objective the sum of all the labels in the test set. As an illustration, we show a toy problem in Fig. 1.
As we can see, the support of P(X) in the training domain (red stars) and the support of P(X) in the test domain (blue line) do not overlap, and neither do the supports of Y across the two domains. The goal is to learn a model on the training data, together with a few labeled test data points (the filled blue circles), such that we can successfully recover the target function (the blue line). In Fig. 3, we show two real-world grape image datasets. The goal is to transfer the model learned from one kind of grape dataset to another. In Fig. 2, we show the labels (the yield) of each grape image dataset, along the 3rd dimension of its feature space. We can see that the real-world problem is quite similar to the toy problem, which indicates that the algorithm we propose in this paper will be both useful and practical for real applications.

[Figure 1: Toy problem (synthetic data: source data, underlying P(source), underlying P(target), selected test x). Figure 2: Real grape data (labels vs. the 3rd dimension of the feature space, for Riesling and Traminette). Figure 3: A part of one image from each grape dataset.]

We evaluate our methods on synthetic data and real-world grape image data. The experimental results show that our transfer learning algorithms significantly outperform existing methods with few labeled target data points.

2 Related Work

Transfer learning is applied when joint distributions differ across source and target domains. Traditional methods for transfer learning use Markov logic networks [4], parameter learning [5, 6], and Bayesian Network structure learning [7], where specific parts of the model are considered to be carried over between tasks. Recently, a large part of transfer learning work has focused on the problem of covariate shift [8, 9, 10]. These works consider the case where only P(X) differs across domains, while the conditional distribution P(Y|X) stays the same.
The kernel mean matching (KMM) method [9, 10] is one of the algorithms that deal with covariate shift. It minimizes $\|\mu(P_{te}) - \mathbb{E}_{x \sim P_{tr}(x)}[\beta(x)\varphi(x)]\|$ over a re-weighting vector $\beta$ on training data points such that the distributions P(X) are matched across domains. However, this work suffers from two major problems. First, the conditional distribution P(Y|X) is assumed to be the same, which might not hold in many real-world cases. The algorithm we propose allows more than just the marginal on X to shift. Second, the KMM method requires that the support of P(Xte) be contained in the support of P(Xtr), i.e., that the training set is richer than the test set. This is not necessarily true in many real cases either. Consider the task of transferring yield prediction using images taken from different vineyards. If the images are taken from different grape varieties or during different times of the year, the texture/color could be very different across transfer tasks. In these cases one might mean-center (and possibly also variance-scale) the data to ensure that the support of P(Xte) is contained in (or at least largely overlaps with) P(Xtr). In this paper, we provide an alternative way to solve the support shift problem that allows more flexible transformations than mean-centering and variance-scaling.
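As a rough illustration of the KMM idea (not the published implementation: the actual method solves a constrained quadratic program, while this sketch uses a ridge-regularized closed form followed by clipping):

```python
import numpy as np

def kmm_weights(X_tr, X_te, bw=1.0, lam=1e-3, B=10.0):
    """Approximate KMM: choose weights beta so that the weighted training
    mean embedding matches the test mean embedding in a Gaussian RKHS."""
    def k(A, C):
        # Gaussian kernel matrix between row sets A and C
        d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bw * bw))
    n, m = len(X_tr), len(X_te)
    K = k(X_tr, X_tr)
    kappa = (n / m) * k(X_tr, X_te).sum(axis=1)
    beta = np.linalg.solve(K + lam * np.eye(n), kappa)
    return np.clip(beta, 0.0, B)       # weights kept non-negative and bounded

# Training points near the test support receive larger weights:
w = kmm_weights(np.array([[0.0], [5.0]]), np.array([[0.1]]))
print(w)
```

Note how a training point far from the test support (here at $x = 5$) is down-weighted to nearly zero, which is exactly why KMM fails when the test support is not covered by the training support.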
Some more recent research [12] has focused on modeling target shift (P(Y) changes), conditional shift (P(X|Y) changes), and a combination of both. The assumption behind target shift is that X depends causally on Y; thus P(Y) can be re-weighted to match the distributions on X across domains. In conditional shift, the authors apply a location-scale transformation to P(X|Y) to match P(X). However, they still assume that the support of P(Y te) is contained in the support of P(Y tr). In addition, they do not assume they can obtain additional labels, Y te, from the target domain, and thus make no use of the labels Y te even when some are available. There have also been a few papers handling differences in P(Y|X). [13] designed specific methods (change of representation, adaptation through prior, and instance pruning) to solve the label adaptation problem. [14] relaxed the requirement that the training and testing examples be drawn from the same source distribution in the context of logistic regression. Similar to work on covariate shift, [15] weighted the samples from the source domain to deal with domain adaptation. These settings are relatively restricted, while we consider a more general case in which both the data points X and the corresponding labels Y can be transformed smoothly across domains. Hence all data are used without any pruning or weighting, with the advantage that the part of the source data that does not help prediction in the target domain will automatically be corrected via the transformation model. The idea of combining transfer learning and active learning has also been studied recently. Both [22] and [23] perform transfer and active learning in multiple stages. The first work uses the source data without any domain adaptation; the second performs domain adaptation at the beginning, without further refinement. [24] and [25] consider active learning under covariate shift and still assume P(Y|X) stays the same. In [16], the authors propose a combined active transfer learning algorithm to handle the general case where P(Y|X) changes smoothly across domains. However, they still apply covariate shift algorithms to deal with the possibility that P(X) differs across domains, which inherits the assumption covariate shift makes on the support of P(X). In this paper, we propose an algorithm that allows more flexible transformations (a location-scale transform on both X and Y). Our experiments on real data show that this additional flexibility pays off in real applications.
3 Approach

3.1 Problem Formulation

We are given a set of n labeled training data points, (Xtr, Y tr), from the source domain, where each $X^{tr}_i \in \Re^{d_x}$ and each $Y^{tr}_i \in \Re^{d_y}$. We are also given a set of m test data points, Xte, from the target domain. Some of these will have corresponding labels, Y teL. When necessary, we will separately denote the subset of Xte that has labels as XteL, and the subset that does not as XteU. For simplicity we restrict Y to be univariate in this paper, but the algorithm we propose easily extends to the multivariate case. For static transfer learning, the goal is to learn a predictive model using all the given data that minimizes the squared prediction error on the test data, $\sum_{i=1}^{m} (\hat Y^{te}_i - Y^{te}_i)^2$, where $\hat Y_i$ and $Y_i$ are the predicted and true labels for the i-th test data point. We will evaluate the transfer learning algorithms by including a subset of labeled test data chosen uniformly at random. For active transfer learning the performance metric is the same; the difference is that the active learning algorithm chooses the test points for labeling rather than being given a randomly chosen set.

3.2 Transfer Learning

Our strategy is to simultaneously learn a nonlinear mapping Xte → Xnew and Y te → Y*. This allows flexible transformations on both X and Y, and our smoothness assumption using a GP prior makes the estimation stable. We call this method Support and Model Shift (SMS).
We apply the following steps ($K$ below denotes the Gaussian kernel, $K_{XY}$ the kernel matrix between $X$ and $Y$, and $\lambda$ ensures an invertible kernel matrix):

• Transform $X^{teL}$ to $X^{new(L)}$ by a location-scale shift: $X^{new(L)} = W^{teL} \odot X^{teL} + B^{teL}$, such that the support of $P(X^{new(L)})$ is contained in the support of $P(X^{tr})$;
• Build a Gaussian Process on $(X^{tr}, Y^{tr})$ and predict on $X^{new(L)}$ to get $Y^{new(L)}$;
• Transform $Y^{teL}$ to $Y^*$ by a location-scale shift: $Y^* = w^{teL} \odot Y^{teL} + b^{teL}$; then we optimize the following empirical loss:
$$\arg\min_{W^{teL}, B^{teL}, w^{teL}, b^{teL}, w^{te}} \|Y^* - Y^{new(L)}\|^2 + \lambda_{reg}\|w^{te} - 1\|^2, \quad (1)$$
where $W^{teL}, B^{teL}$ are matrices of the same size as $X^{teL}$; $w^{teL}, b^{teL}$ are vectors of the same size as $Y^{teL}$ ($l \times 1$, where $l$ is the number of labeled samples in the target domain); $w^{te}$ is an $m \times 1$ scale vector on all $Y^{te}$; and $\lambda_{reg}$ is a regularization parameter. To ensure the smoothness of the transformation w.r.t. X, we parameterize $W^{teL}, B^{teL}, w^{teL}, b^{teL}$ as
$$W^{teL} = R^{teL}G, \quad B^{teL} = R^{teL}H, \quad w^{teL} = R^{teL}g, \quad b^{teL} = R^{teL}h,$$
where $R^{teL} = L^{teL}(L^{teL} + \lambda I)^{-1}$ and $L^{teL} = K_{X^{teL}X^{teL}}$. Following the same smoothness constraint, we also have $w^{te} = R^{te}g$, where $R^{te} = K_{X^{te}X^{teL}}(L^{teL} + \lambda I)^{-1}$. This parametrization results in the new objective function:
$$\arg\min_{G, H, g, h} \|(R^{teL}g \odot Y^{teL} + R^{teL}h) - Y^{new(L)}\|^2 + \lambda_{reg}\|R^{te}g - 1\|^2. \quad (2)$$
Although the objective minimizes the discrepancy between the transformed labels and the predicted labels only for the labeled points in the test domain, we put a regularization term on the transformation for all $X^{te}$ to ensure overall smoothness in the test domain. Note that the nonlinearity of the transformation makes the SMS approach capable of recovering a fairly wide set of changes, including non-monotonic ones. However, because of the smoothness constraint imposed on the location-scale transformation, it might not recover some extreme cases where the scale or location change is non-smooth or discontinuous.
However, in such cases the learning problem would itself be very challenging. We use a Metropolis-Hastings algorithm to optimize the objective (Eq. 2), which is multi-modal due to the use of the Gaussian kernel. The proposal distribution is $\theta_t \sim N(\theta_{t-1}, \Sigma)$, where $\Sigma$ is a diagonal matrix with diagonal elements determined by the magnitude of $\theta \in \{G, H, g, h\}$. In addition, the transformation on X requires that the support of $P(X^{new})$ be contained in the support of $P(X^{tr})$, which might be hard to achieve on real data, especially when X has a high-dimensional feature space. To ensure that the training data can be better utilized, we relax the support-containment condition by enforcing an overlap ratio between the transformed $X^{new}$ and $X^{tr}$, i.e., we reject those proposals that do not lead to a transformation exceeding this ratio. After obtaining $G, H, g, h$, we make predictions on $X^{teU}$ as follows:

• Transform $X^{teU}$ to $X^{new(U)}$ with the optimized $G, H$: $X^{new(U)} = W^{teU} \odot X^{teU} + B^{teU} = R^{teU}G \odot X^{teU} + R^{teU}H$, where $R^{teU} = K_{X^{teU}X^{teL}}(L^{teL} + \lambda I)^{-1}$;
• Build a Gaussian Process on $(X^{tr}, Y^{tr})$ and predict on $X^{new(U)}$ to get $Y^{new(U)}$;
• Predict using the optimized $g, h$: $\hat Y^{teU} = (Y^{new(U)} - b^{teU})./w^{teU} = (Y^{new(U)} - R^{teU}h)./R^{teU}g$.

With the use of $W = RG$, $B = RH$, $w = Rg$, $b = Rh$, we allow more flexible transformations than mean-centering and variance-scaling, while assuming that the transformations are smooth w.r.t. X. We will illustrate the advantage of the proposed method in the experimental section.

3.3 A Kernel Mean Embedding Point of View

After the transformation from $X^{teL}$ to $X^{new(L)}$, we build a Gaussian Process on $(X^{tr}, Y^{tr})$ and predict on $X^{new(L)}$ to get $Y^{new(L)}$. This is equivalent to estimating $\hat\mu[P_{Y^{new(L)}}]$ using conditional distribution embeddings [11] with a linear kernel on Y:
$$\hat\mu[P_{Y^{new(L)}}] = \hat U[P_{Y^{tr}|X^{tr}}]\,\hat\mu[P_{X^{new(L)}}] = \psi(y^{tr})\big(\varphi(x^{tr})^\top\varphi(x^{tr}) + \lambda I\big)^{-1}\varphi^\top(x^{tr})\,\varphi(x^{new(L)}) = \big(K_{X^{new(L)}X^{tr}}(K_{X^{tr}X^{tr}} + \lambda I)^{-1}Y^{tr}\big)^\top.$$
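Concretely, the smoothness parameterization and the objective in Eq. (2) can be sketched as follows (the kernel bandwidth, λ, and λreg values are our own choices, and the Metropolis-Hastings search over G, H, g, h is omitted):

```python
import numpy as np

def gaussian_kernel(A, B, bw=1.0):
    # K[i, j] = exp(-||A_i - B_j||^2 / (2 bw^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw * bw))

def gp_mean(X_tr, Y_tr, X_new, bw=1.0, lam=1e-3):
    # GP predictive mean built on (X_tr, Y_tr), evaluated at X_new
    K = gaussian_kernel(X_tr, X_tr, bw)
    return gaussian_kernel(X_new, X_tr, bw) @ np.linalg.solve(K + lam * np.eye(len(X_tr)), Y_tr)

def sms_objective(G, H, g, h, X_tr, Y_tr, X_teL, Y_teL, X_te, lam=1e-3, lam_reg=1.0):
    L = gaussian_kernel(X_teL, X_teL)
    A = np.linalg.inv(L + lam * np.eye(len(X_teL)))
    R_teL, R_te = L @ A, gaussian_kernel(X_te, X_teL) @ A   # smoothness maps
    X_new = (R_teL @ G) * X_teL + (R_teL @ H)               # location-scale shift on X
    Y_new = gp_mean(X_tr, Y_tr, X_new)
    Y_star = (R_teL @ g) * Y_teL + (R_teL @ h)              # location-scale shift on Y
    return np.sum((Y_star - Y_new) ** 2) + lam_reg * np.sum((R_te @ g - 1.0) ** 2)
```

Each evaluation of `sms_objective` corresponds to one proposal in the Metropolis-Hastings chain; the smoothness maps guarantee that nearby test points receive similar per-point transformations.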
Finally, we want to find the optimal $G, H, g, h$ such that the distributions on Y are matched across domains, i.e., $P_{Y^*} = P_{Y^{new(L)}}$. The objective function in Eq. 2 is effectively minimizing the maximum mean discrepancy
$$\|\hat\mu[P_{Y^*}] - \hat\mu[P_{Y^{new(L)}}]\|^2 = \|\hat\mu[P_{Y^*}] - \hat U[P_{Y^{tr}|X^{tr}}]\,\hat\mu[P_{X^{new(L)}}]\|^2,$$
with a Gaussian kernel on X and a linear kernel on Y. The transformations $\{W, B, w, b\}$ are smooth w.r.t. X. Take $w$ for example:
$$\hat\mu[P_w] = \hat U[P_{w|X^{teL}}]\,\hat\mu[P_{X^{teL}}] = \phi(g)\big(\varphi^\top(x^{teL})\varphi(x^{teL}) + \lambda I\big)^{-1}\varphi^\top(x^{teL})\varphi(x^{teL}) = \phi(g)(L^{teL} + \lambda I)^{-1}L^{teL} = (R^{teL}g)^\top.$$

3.4 Active Learning

We consider two active learning goals and apply a myopic selection criterion to each: (1) Active Learning, which reduces the total predictive covariance [18, 19]; an optimal myopic selection is achieved by choosing the point which minimizes the trace of the predictive covariance matrix conditioned on that selection. (2) Active Surveying [20, 21], which uses as its estimation objective the sum of all the labels in the test set; an optimal myopic selection is achieved by choosing the point which minimizes the sum over all elements of the predictive covariance conditioned on that selection. We now derive the predictive covariance of the SMS approach. The transformation between $\hat Y^{teU}$ and $Y^{new(U)}$ is $\hat Y^{teU} = (Y^{new(U)} - b^{teU})./w^{teU}$, hence
$$\mathrm{Cov}[\hat Y^{teU}] = \mathrm{diag}\{1./w^{teU}\} \cdot \mathrm{Cov}(Y^{new(U)}) \cdot \mathrm{diag}\{1./w^{teU}\}.$$
As for $Y^{new(U)}$, since we build on Gaussian Processes for the prediction from $X^{new(U)}$ to $Y^{new(U)}$, it follows that $Y^{new(U)}|X^{new(U)} \sim N(\mu, \Sigma)$, where
$$\mu = K_{X^{new(U)}X^{tr}}(K_{X^{tr}X^{tr}} + \lambda I)^{-1}Y^{tr}, \qquad \Sigma = K_{X^{new(U)}X^{new(U)}} - K_{X^{new(U)}X^{tr}}(K_{X^{tr}X^{tr}} + \lambda I)^{-1}K_{X^{tr}X^{new(U)}}.$$
The transformation between $X^{new(U)}$ and $X^{teU}$ is $X^{new(U)} = W^{teU} \odot X^{teU} + B^{teU}$. Integrating over $X^{new(U)}$, i.e.,
$$P(\hat Y^{new(U)}|X^{teU}, D) = \int P(\hat Y^{teU}|X^{new(U)}, D)\,P(X^{new(U)}|X^{teU})\,dX^{new(U)},$$
with $D = \{X^{tr}, Y^{tr}, X^{teL}, Y^{teL}\}$.
Using the empirical form of $P(X^{new(U)}|X^{teU})$, which has probability $1/|X^{teU}|$ for each sample, we get $\mathrm{Cov}[\hat Y^{new(U)}|X^{teU}, X^{tr}, Y^{tr}, X^{teL}, Y^{teL}] = \Sigma$. Plugging the covariance of $Y^{new(U)}$ into $\mathrm{Cov}[\hat Y^{teU}]$, we obtain the final predictive covariance:
$$\mathrm{Cov}(\hat Y^{teU}) = \mathrm{diag}\{1./w^{teU}\} \cdot \Sigma \cdot \mathrm{diag}\{1./w^{teU}\} \quad (3)$$

4 Experiments

4.1 Synthetic Dataset

4.1.1 Data Description

We generate the synthetic data with (using MATLAB notation): Xtr = randn(80, 1); Ytr = sin(2*Xtr + 1) + 0.1*randn(80, 1); Xte = [w*min(Xtr)+b : 0.03 : w*max(Xtr)/3+b]; Yte = sin(2*(revw*Xte + revb) + 1) + 2. In words, Xtr is drawn from a standard normal distribution, and Y tr is a sine function with Gaussian noise. Xte is drawn from a uniform distribution with a
As we can see from the results, our proposed approach performs better than all other approaches. As an example, the results for transfer learning with 5 labeled test points on the synthetic dataset are shown in Fig. 5. The 5 labeled test points are shown as filled blue circles. First, our proposed model, SMS, can successfully learn both the transformation on X and the transformation on Y , thus resulting in almost a perfect fit on unlabeled test points. Using only labeled test points results in a poor fit towards the right part of the function because there are no observed test labels in that part. Using both training and labeled test points results in a similar fit as using the labeled test points only, because the support of training and test domain do not overlap. The offset approach with mean-centered+variance-scaled data, also results in a poor fit because the training model is not true any more. It would have performed well if the variances are similar across domains. The support of the test data we generated, however, only consists of part of the support of the training data and hence simple variance-scaling does not yield a good match on P(Y |X). The distribution matching approach suffers the same problem. The KMM approach, as mentioned before, applies the same conditional model P(Y |X) across domains, hence it does not perform well. The Target/Conditional Shift approach does not perform well either since it does not utilize any of the labeled test points. Its predicted support of P(Y te), is constrained in the support of P(Y tr), which results in a poor prediction of Y te once there exists an offset between the Y ’s. 
[Figure 4: Comparison of MSE on the synthetic dataset with {2, 5, 10} labeled test points, for SMS, use only test x, use both x, offset (original / mean-centered / mean-var-centered), and DM (original / mean-centered / mean-var-centered). Text box with constant MSEs: KMM 4.46 (original), 2.25 (mean-centered), 4.63 (mean-var-centered); T/C shift 1.97, 3.51, 4.71, respectively.]

4.2 Real-world Dataset

4.2.1 Data Description

We have two datasets of grape images taken from vineyards, with the number of grapes on them as labels: one is riesling (128 labeled images), the other is traminette (96 labeled images), as shown in Figure 3. The goal is to transfer the model learned from one kind of grape dataset to another. The total numbers of grapes for these two datasets are 19,253 and 30,360, respectively.

[Figure 5: Comparison of results on the synthetic dataset: an example. Panels show SMS, use only labeled test x, offset (mean-var-centered data), DM (mean-var-centered data), and KMM/T/C shift (mean-centered and mean-var-centered data), each plotting source data, target, selected test x, and predictions.]

We extract raw-pixel features from the images and use Random Kitchen Sinks [1] to get the coefficients as feature vectors [2], resulting in 2177 features. On the traminette dataset we have achieved a cross-validated R-squared correlation of 0.754. Previously, specifically designed image-processing methods achieved an R-squared correlation of 0.73 [3].
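The feature-extraction step can be sketched with random Fourier features in the spirit of Random Kitchen Sinks [1]. The kernel bandwidth `gamma`, the Gaussian weight scale, and the toy input sizes below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def random_kitchen_sinks(X, n_features=2177, gamma=1.0, seed=0):
    """Map raw-pixel vectors X (n_samples x d) to random Fourier
    features approximating an RBF kernel, following Rahimi & Recht [1].
    n_features and gamma are illustrative choices."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random projection directions and phases.
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    c = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + c)

# Toy usage: 4 "images" with 10 raw pixels each, 8 features.
Z = random_kitchen_sinks(np.ones((4, 10)), n_features=8)
```

A linear regressor trained on such feature vectors approximates kernel regression on the raw pixels, which is the usual motivation for this construction.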
This grape-detection method requires substantial manual labeling work and cannot be directly applied across different varieties of grapes (due to differences in size and color). Our proposed approach for transfer learning, however, can be directly used for different varieties of grapes or even different kinds of crops.

4.2.2 Results

The results for transfer learning are shown in Table 1. We compare the SMS approach with the same baselines as in the synthetic experiments. For {DM, offset, KMM, T/C shift}, we only show their best results after applying them to the original data, the mean-centered data, and the mean-centered+variance-scaled data. In each row, the result in bold indicates the best RMSE. A result with a star mark indicates that the best result is statistically significant at the p = 0.05 level under unpaired t-tests. We can see that our proposed algorithm yields better results in most cases, especially when the number of labeled test points is small. This means our proposed algorithm can better utilize the source data and will be particularly useful in the early stage of learning model transfer, when only a small number of labels in the target domain is available/required. The Active Learning/Active Surveying results are shown in Fig. 6. We compare the SMS approach (covariance matrix in Eq. 3 for test-point selection, and SMS for prediction) with: (1) combined+SMS: combined covariance [16] for selection, and SMS for prediction; (2) random+SMS: random selection, and SMS for prediction; (3) combined+offset: the Active Learning/Surveying algorithm proposed in [16], using combined covariance for selection and the corresponding offset approach for prediction. From the results we can see that SMS is the best model overall. SMS is better than the Active Learning/Surveying approach proposed in [16] (combined+offset), especially in the Active Surveying result.
Moreover, the combined+SMS result is better than combined+offset, which also indicates that the SMS model is better for prediction than the offset approach in [16]. Also, given the better model that SMS has, there is not much difference in which active learning algorithm we use. However, SMS with active selection is better than SMS with random selection, especially in the Active Learning result.

Table 1: RMSE for transfer learning on real data

# XteL | SMS      | DM       | Offset   | Only test x | Both x    | KMM  | T/C Shift
5      | 1197±23* | 1359±54  | 1303±39  | 1479±69     | 2094±60   | 2127 | 2330
10     | 1046±35* | 1196±59  | 1234±53  | 1323±91     | 1939±41   | 2127 | 2330
15     | 993±28   | 1055±27  | 1063±30  | 1104±46     | 1916±36   | 2127 | 2330
20     | 985±13   | 1056±54  | 1024±20  | 1086±74     | 1832±46   | 2127 | 2330
25     | 982±14   | 1030±29  | 1040±27  | 1039±31     | 1839±41   | 2127 | 2330
30     | 960±19   | 921±29   | 961±30   | 937±29      | 1663±31   | 2127 | 2330
40     | 890±26   | 898±30   | 938±30   | 901±31      | 1621±34   | 2127 | 2330
50     | 893±16   | 925±59   | 935±59   | 926±64      | 1558±51   | 2127 | 2330
70     | 860±40   | 805±38   | 819±40   | 804±37      | 1399±63   | 2127 | 2330
90     | 791±98   | 838±102  | 863±99   | 838±104     | 1288±117  | 2127 | 2330

[Figure 6: Active Learning/Surveying results on the real dataset (legend: selection+prediction). Left panel: Active Learning, RMSE vs. number of labeled test points; right panel: Active Surveying, absolute error vs. number of labeled test points, for SMS, combined+SMS, random+SMS, and combined+offset.]

5 Discussion and Conclusion

Solving objective Eq. 2 is relatively involved. Gradient methods could be a faster alternative, but the non-convexity of the objective makes it hard to find the global optimum with them. In practice, we find it relatively efficient to solve Eq. 2 with proper initializations (such as using the ratio of scales of the supports for w, and the offset between the scaled means for b).
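For the active test-point selection, the predictive covariance of Eq. (3) can be computed directly, and a simple maximum-variance rule gives one plausible selection criterion. This is a sketch: the maximum-variance rule here is a generic variance-based active learning heuristic, not necessarily the exact combined-covariance criterion of [16]:

```python
import numpy as np

def predictive_cov(Sigma, w):
    """Eq. (3): scale the covariance Sigma of the transformed labels
    back through the location-scale weights w (elementwise 1/w)."""
    D = np.diag(1.0 / np.asarray(w, dtype=float))
    return D @ Sigma @ D

def pick_next(Sigma, w):
    """Select the unlabeled test point with the largest predictive
    variance (diagonal of Eq. (3)) -- a simple variance-based rule."""
    return int(np.argmax(np.diag(predictive_cov(Sigma, w))))

# Toy usage: point 2 has variance 0.2 / 0.5^2 = 0.8, the largest.
Sigma = np.diag([0.1, 0.5, 0.2])
next_idx = pick_next(Sigma, [1.0, 1.0, 0.5])
```

For Active Surveying, the same covariance would instead enter the variance of the sum of predictions, but the scaling in Eq. (3) is identical.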
In our real-world dataset with 2177 features, it takes about 2.54 minutes on average, in a single-threaded MATLAB process on a 3.1 GHz CPU with 8 GB RAM, to solve the objective and recover the transformation. As part of future work, we are working on faster ways to solve the proposed objective. In this paper, we proposed a transfer learning algorithm that handles both support and model shift across domains. The algorithm transforms both X and Y by a location-scale shift across domains, then matches the labels in the two domains so that both transformations can be learned. Since we allow more flexible transformations than mean-centering and variance-scaling, the proposed method yields better results than traditional methods. Results on both a synthetic dataset and a real-world dataset show the advantage of our proposed method.

Acknowledgments

This work is supported in part by the US Department of Agriculture under grant number 20126702119958.

References

[1] Rahimi, A. and Recht, B. Random features for large-scale kernel machines. Advances in Neural Information Processing Systems, 2007.
[2] Oliva, Junier B., Neiswanger, Willie, Poczos, Barnabas, Schneider, Jeff, and Xing, Eric. Fast distribution to real regression. AISTATS, 2014.
[3] Nuske, S., Gupta, K., Narasimhan, S., and Singh, S. Modeling and calibrating visual yield estimates in vineyards. International Conference on Field and Service Robotics, 2012.
[4] Mihalkova, Lilyana, Huynh, Tuyen, and Mooney, Raymond J. Mapping and revising Markov logic networks for transfer learning. Proceedings of the 22nd AAAI Conference on Artificial Intelligence (AAAI-2007), 2007.
[5] Do, Cuong B. and Ng, Andrew Y. Transfer learning for text classification. Neural Information Processing Systems Foundation, 2005.
[6] Raina, Rajat, Ng, Andrew Y., and Koller, Daphne. Constructing informative priors using transfer learning. Proceedings of the Twenty-third International Conference on Machine Learning, 2006.
[7] Niculescu-Mizil, Alexandru and Caruana, Rich. Inductive transfer for Bayesian network structure learning. Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS), 2007.
[8] Shimodaira, Hidetoshi. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227-244, 2000.
[9] Huang, Jiayuan, Smola, Alex, Gretton, Arthur, Borgwardt, Karsten, and Schölkopf, Bernhard. Correcting sample selection bias by unlabeled data. NIPS, 2007.
[10] Gretton, Arthur, Borgwardt, Karsten M., Rasch, Malte, Schölkopf, Bernhard, and Smola, Alex. A kernel method for the two-sample-problem. NIPS, 2007.
[11] Song, Le, Huang, Jonathan, Smola, Alex, and Fukumizu, Kenji. Hilbert space embeddings of conditional distributions with applications to dynamical systems. ICML, 2009.
[12] Zhang, Kun, Schölkopf, Bernhard, Muandet, Krikamol, and Wang, Zhikun. Domain adaptation under target and conditional shift. ICML, 2013.
[13] Jiang, J. and Zhai, C. Instance weighting for domain adaptation in NLP. Proc. 45th Ann. Meeting of the Assoc. Computational Linguistics, pp. 264-271, 2007.
[14] Liao, X., Xue, Y., and Carin, L. Logistic regression with an auxiliary data source. Proc. 21st Intl Conf. Machine Learning, 2005.
[15] Sun, Qian, Chattopadhyay, Rita, Panchanathan, Sethuraman, and Ye, Jieping. A two-stage weighting framework for multi-source domain adaptation. NIPS, 2011.
[16] Wang, Xuezhi, Huang, Tzu-Kuo, and Schneider, Jeff. Active transfer learning under model shift. ICML, 2014.
[17] Pan, Sinno Jialin and Yang, Qiang. A survey on transfer learning. TKDE, 2009.
[18] Seo, Sambu, Wallat, Marko, Graepel, Thore, and Obermayer, Klaus. Gaussian process regression: Active data selection and test point rejection. IJCNN, 2000.
[19] Ji, Ming and Han, Jiawei. A variance minimization criterion to active learning on graphs. AISTATS, 2012.
[20] Garnett, Roman, Krishnamurthy, Yamuna, Xiong, Xuehan, Schneider, Jeff, and Mann, Richard. Bayesian optimal active search and surveying. ICML, 2012.
[21] Ma, Yifei, Garnett, Roman, and Schneider, Jeff. Sigma-optimality for active learning on Gaussian random fields. NIPS, 2013.
[22] Shi, Xiaoxiao, Fan, Wei, and Ren, Jiangtao. Actively transfer domain knowledge. ECML, 2008.
[23] Rai, Piyush, Saha, Avishek, Daumé III, Hal, and Venkatasubramanian, Suresh. Domain adaptation meets active learning. Active Learning for NLP (ALNLP), Workshop at NAACL-HLT, 2010.
[24] Saha, Avishek, Rai, Piyush, Daumé III, Hal, Venkatasubramanian, Suresh, and DuVall, Scott L. Active supervised domain adaptation. ECML, 2011.
[25] Chattopadhyay, Rita, Fan, Wei, Davidson, Ian, Panchanathan, Sethuraman, and Ye, Jieping. Joint transfer and batch-mode active learning. ICML, 2013.
Computing Nash Equilibria in Generalized Interdependent Security Games

Hau Chan, Luis E. Ortiz
Department of Computer Science, Stony Brook University
{hauchan,leortiz}@cs.stonybrook.edu

Abstract

We study the computational complexity of computing Nash equilibria in generalized interdependent-security (IDS) games. Like traditional IDS games, originally introduced by economists and risk-assessment experts Heal and Kunreuther about a decade ago, generalized IDS games model agents' voluntary investment decisions when facing potential direct risk and transfer-risk exposure from other agents. A distinct feature of generalized IDS games, however, is that full investment can reduce transfer risk. As a result, depending on the transfer-risk reduction level, generalized IDS games may exhibit strategic complementarity (SC) or strategic substitutability (SS). We consider three variants of generalized IDS games in which players exhibit only SC, only SS, and both SC+SS. We show that determining whether there is a pure-strategy Nash equilibrium (PSNE) in SC+SS-type games is NP-complete, while computing a single PSNE in SC-type games takes worst-case polynomial time. As for the problem of computing all mixed-strategy Nash equilibria (MSNE) efficiently, we produce a partial characterization. Whenever each agent in the game is indiscriminate in terms of the transfer-risk exposure to the other agents, a case that Kearns and Ortiz originally studied in the context of traditional IDS games in their NIPS 2003 paper, we can compute all MSNE that satisfy some ordering constraints in polynomial time in all three game variants. Yet, there is a computational barrier in the general (transfer) case: we show that the computational problem is as hard as the Pure-Nash-Extension problem, also originally introduced by Kearns and Ortiz, and that it is NP-complete for all three variants.
Finally, we experimentally examine and discuss the practical impact that the additional protection from transfer risk allowed in generalized IDS games has on MSNE by solving several randomly-generated instances of SC+SS-type games with graph structures taken from several real-world datasets.

1 Introduction

Interdependent Security (IDS) games [1] model the interaction among multiple agents where each agent chooses whether to invest in some form of security to prevent a potential loss, based on both direct and indirect (transfer) risks. In this context, an agent's direct risk is the risk that does not result from the other agents' decisions, while indirect (transfer) risk is the risk that does. Let us be more concrete and consider an application of IDS games. Imagine that you are the owner of an apartment. One day, there was a fire alarm in the apartment complex. Luckily, it was nothing major: nobody got hurt. As a result, you realize that your apartment could easily burn down because you do not have any fire-extinguishing mechanism, such as a sprinkler system. However, as you wonder about the cost and the effectiveness of the fire-extinguishing mechanism, you notice that it can only protect your apartment if a small fire originates in your apartment. If a fire originates on the floor below, or above, or even in the apartment adjacent to yours, then you are out of luck: by the time the fire gets to your apartment, the fire would be fierce enough

[Figure 1: α-IDS Game of Zachary Karate Club at a Nash Equilibrium. Three panels over the 34-node club network, with α ∼ N(0.4, 0.2), α ∼ N(0.6, 0.2), and α ∼ N(0.8, 0.2).]
[Figure 1 legend: Square ≡ SC player, Circle ≡ SS player, Colored ≡ Invest, and Non-Colored ≡ No Invest.]

Table 1: Complexity of α-IDS Games

Game type             | One PSNE             | All MSNE                                      | Pure-Nash Extension
SC (n SC players)     | Always exists, O(n²) | Uniform Transfers (UT): O(n⁴)                 | NP-complete
SS (n SS players)     | May not exist        | UT wrt Ordering 1: O(n⁴)                      | NP-complete
SC+SS (nsc + nss = n) | NP-complete          | UT wrt Ordering 1: O(nsc⁴·nss³ + nsc³·nss⁴)   | NP-complete

already. You realize that if other apartment owners invest in the fire-extinguishing mechanism, the likelihood of their fires reaching you decreases drastically. As a result, you debate whether or not to invest in the fire-extinguishing mechanism given whether or not the other owners invest in theirs. Indeed, making things more interesting, you are not the only one going through this decision process; assuming that everybody is concerned about their safety in the apartment complex, everybody wants to decide whether or not to invest in the fire-extinguishing mechanism given the individual decisions of the other owners. To be more specific, in IDS games the agents are the apartment owners, and each owner must decide whether or not to invest in the fire-extinguishing mechanism based on cost, potential loss, and the direct and indirect (transfer) risks. The direct risk here is the chance that an agent will start a fire (e.g., by forgetting to turn off gas burners or overloading electrical outlets). The transfer risk here is the chance that a fire from somebody else's (unprotected) apartment will spread to other apartments. Moreover, transfer risk comes only from direct neighbors and cannot be re-transferred. For example, if a fire from your neighbors is transferred to you, then, in this model, this fire cannot be re-transferred to your neighbors. Of course, IDS games can be used to model other practical real-world situations such as airline security [2], vaccination [3], and cargo shipment [4].
See Laszka et al. [5] for a survey on IDS games. Note that in the apartment-complex example, the fire-extinguishing mechanism does not protect an agent from fires that originate in other apartments. In this work, we consider a more general, and possibly also more realistic, framework of IDS games where investment can partially protect against the indirect risk (i.e., investment in the fire-extinguishing mechanism can partially extinguish some fires that originate from others). To distinguish the naming scheme, we will call these generalized IDS games α-IDS games, where α is a vector of probabilities, one for each agent, specifying the probability that the transfer risk will not be protected against by the investment. In other words, agent i's investment can reduce indirect risk with probability (1 − αi). Given an α, the players can be partitioned into two types: the SC type and the SS type. The SC players exhibit strategic complementarity: they invest if sufficiently many people invest. On the other hand, the SS players exhibit strategic substitutability: they do not invest if too many people invest. As a preview of how α can affect the number of SC and SS players and Nash equilibria, which is the solution concept used here (formally defined in the next section), Figure 1 presents the result of our simulation of an instance of SC+SS α-IDS games using the Zachary Karate Club network [6]. The nodes are the players, and the edge between nodes u and v represents the potential transfers from u to v and v to u. As we increase α's value, the number of SC players increases while the number of SS players decreases. Interestingly, almost all of the SC players invest, and all of the SS players are "free riding" as they do not invest at the NE. Our goal here is to understand the behavior of the players in α-IDS games. Achieving this goal will depend on the type of players, as characterized by α, and our ability to efficiently compute NE, among other things.
While Heal and Kunreuther [1] and Chan et al. [7] previously proposed similar models, we are unaware of any work on computing NE in α-IDS games and analyzing agents' equilibrium behavior. The closest work to ours is Kearns and Ortiz [8], where they consider the standard/traditional IDS model in which one cannot protect against the indirect risk (i.e., α ≡ 1). In particular, we study the computational aspects of computing NE of α-IDS games in cases of all game players being (1) SC, (2) SS, and (3) both SC and SS. Our contributions, summarized in Table 1, follow.

• We show that determining whether there is a PSNE in (3) is NP-complete. However, there is a polynomial-time algorithm to compute a PSNE for (1). We identify some instances for (2) where a PSNE does and does not exist.
• We study the instances of α-IDS games where we can compute all NE. We show that if the transfer probabilities are uniform (independent of the destination), then there is a polynomial-time algorithm to compute all NE in case (1). Cases (2) and (3) may still take exponential time to compute all NE. However, based on some ordering constraints, we are able to efficiently compute all NE that satisfy the ordering constraints.
• We consider the general-transfer case and show that the pure-Nash-extension problem [8], which, roughly, is the problem of determining whether there is a PSNE consistent with some partial assignment of actions to some players, is NP-complete for cases (1), (2), and (3). This implies that computing all NE is likely as hard.
• We perform experiments on several randomly-generated instances of SC+SS α-IDS games using various real-world graph structures to show α's effect on the number of SC and SS players and on the NE of the games.

2 α-IDS games: preliminaries, model definition, and solution concepts

In this section, we borrow definitions and notation for (graphical) IDS games from Kearns et al. [9], Kearns and Ortiz [8], and Chan et al. [7].
In an α-IDS game, we have an underlying (directed) graph G = (V, E), where V = {1, 2, ..., n} represents the n players and E = {(i, j) | qij > 0}, with qij the transfer probability that player i will transfer the bad event to player j. We define Pa(i) and Ch(i) as the sets of parents and children of player i in G, respectively. In an α-IDS game, each player i has to decide whether or not to invest in protection. Therefore, the action or pure strategy of player i is binary, denoted here by ai, with ai = 1 if i decides to invest and ai = 0 otherwise. We denote the joint action or joint pure strategy of all players by the vector a ≡ (a1, ..., an). For convenience, we denote by a−i all components of a except that of player i. Similarly, given S ⊂ V, we denote by aS and a−S all components of a corresponding to players in S and V − S, respectively. We also use the notation a ≡ (ai, a−i) ≡ (aS, a−S) when clear from context. In addition, in an α-IDS game, there is a cost of investment Ci and a loss Li associated with the bad event occurring, either through direct or indirect (transferred) contamination. For convenience, we denote the cost-to-loss ratio of player i by Ri ≡ Ci/Li. We parametrize the direct risk as pi, the probability that player i will experience the bad event through direct contamination. Specific to α-IDS games, the parameter αi denotes the probability that full investment in security (i.e., ai = 1) is ineffective against player i's transfer risk. Said differently, the parameter αi models the degree to which investment in security can potentially reduce player i's transfer risk. Player i's transfer-risk function is ri(aPa(i)) ≡ 1 − si(aPa(i)), where si(aPa(i)) ≡ ∏_{j ∈ Pa(i)} [1 − (1 − aj)qji] is a function of the joint actions of Pa(i) because of the potential overall transfer probability (and thus risk) from Pa(i) to i given Pa(i)'s actions. One can think of the function si as the transfer-safety function of player i.
The expression for si makes explicit the implicit assumption that transfers of the bad event are independent. Putting the above together, the cost function of player i is

Mi(ai, aPa(i)) ≡ ai[Ci + αi·ri(aPa(i))·Li] + (1 − ai)[pi + (1 − pi)·ri(aPa(i))]·Li .

Note that the safety function describes the situation where a player j can only be "risky" to player i if and only if j does not invest in protection. We assume, without loss of generality (wlog), that Ci ≪ Li, or equivalently, that Ri ≪ 1; otherwise, not investing would be a dominant strategy. While a syntactically minor addition to the traditional IDS model, the parameter α introduces a major semantic difference and additional complexity over the traditional model. The semantic difference is perhaps clearest from examining the best response of the players: player i invests if

Ci + αi·ri(aPa(i))·Li < [pi + (1 − pi)·ri(aPa(i))]·Li  ⇔  Ri − pi < (1 − pi − αi)·ri(aPa(i)) .

The expression (1 − pi − αi) is positive when αi < 1 − pi and negative when αi > 1 − pi. The best-response condition flips when the expression is negative. (When αi = 1 − pi, player i's investment decision simplifies because the player's internal risk fully determines the optimal choice.) In fact, the parameter α induces a partition of the set of players based on whether the corresponding αi value is higher or lower than 1 − pi. We will call the set of players with αi > 1 − pi the set of strategic complementarity (SC) players. SC players exhibit as optimal behavior that their preference for investing increases as more players invest: they are "followers." The set of players with αi < 1 − pi is the set of strategic substitutability (SS) players. In this case, SS players' preference for investing decreases as more players invest: they are "free riders." For all i ∈ SC, let ∆sc_i ≡ 1 − (Ri − pi)/(1 − pi − αi); similarly for ∆ss_i, for i ∈ SS.
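The quantities just defined translate directly into code. The following sketch computes the transfer-safety function s_i, the cost M_i, and the SC/SS classification by comparing α_i with 1 − p_i; the function and variable names are ours, for illustration:

```python
import numpy as np

def transfer_safety(a_parents, q_parents):
    """s_i(a_Pa(i)) = prod over parents j of [1 - (1 - a_j) q_ji].
    Empty parent set gives safety 1 (no transfer risk)."""
    a = np.asarray(a_parents, dtype=float)
    q = np.asarray(q_parents, dtype=float)
    return float(np.prod(1 - (1 - a) * q))

def cost(ai, a_parents, q_parents, C, L, p, alpha):
    """Player i's cost M_i for action ai in {0, 1}."""
    r = 1 - transfer_safety(a_parents, q_parents)  # transfer risk
    if ai == 1:
        return C + alpha * r * L
    return (p + (1 - p) * r) * L

def player_type(p, alpha):
    """alpha_i > 1 - p_i -> strategic complementarity (SC);
    alpha_i < 1 - p_i -> strategic substitutability (SS)."""
    return "SC" if alpha > 1 - p else "SS"
```

For example, a player with p = 0.1 and α = 0.95 is SC (0.95 > 0.9), while the same player with α = 0.5 is SS; with no parents, the invest cost reduces to C and the no-invest cost to p·L, matching the formula above.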
We can define the best-response correspondence for player i ∈ SC as

BRsc_i(aPa(i)) ≡ 0 if ∆sc_i > si(aPa(i));  1 if ∆sc_i < si(aPa(i));  [0, 1] if ∆sc_i = si(aPa(i)).

The best-response correspondence BRss_i for player i ∈ SS is similar, except that we replace ∆sc_i by ∆ss_i and "reverse" the strict inequalities above. We use the best-response correspondence to define NE (i.e., both PSNE and MSNE). We introduce randomized strategies: in a joint mixed strategy x ∈ [0, 1]^n, each component xi corresponds to player i's probability of investing (i.e., Pr(ai = 1) = xi). Player i's decision depends on expected cost, which, with abuse of notation, we denote by Mi(x).

Definition A joint action a ∈ {0, 1}^n is a pure-strategy Nash equilibrium (PSNE) of an IDS game if ai ∈ BRi(aPa(i)) for each player i. Replacing a with a joint mixed strategy x ∈ [0, 1]^n in the equilibrium condition, and in the respective functions it depends on, leads to the condition for x being a mixed-strategy Nash equilibrium (MSNE).

Note that the set of PSNE ⊂ MSNE. Hence, we use NE and MSNE interchangeably. For general (and graphical) games, determining the existence of PSNE is NP-complete [10]. MSNE always exist [11], but computing an MSNE is PPAD-complete [12-14].

3 Computational results for α-IDS games

[Figure 2: 3-SAT-induced α-IDS game graph.]

In this section, we present and discuss the results of our computational study of α-IDS games. We begin by considering the problem of computing PSNE, then move to the more general problem of computing MSNE.

3.1 Finding a PSNE in α-IDS games

In this subsection, we look at the complexity of determining whether a PSNE exists in α-IDS games, and of finding one if it does. Our first result follows.

Theorem 1 Determining whether there is a PSNE in n-player SC+SS α-IDS games is NP-complete.

Proof (Sketch) We reduce an instance of a 3-SAT variant to our problem. Each clause of the 3-SAT variant contains either only negated variables or only un-negated variables [15].
We have an SC player for each clause and two SS players for each variable. The clause players invest if there exists a neighbor (one of its literals) that invests. For each variable vi, we introduce two players, vi and v̄i, with preferences for mutually opposite actions. They invest if there exists a neighbor (its clause or the other variable player) that does not invest. Figure 2 depicts the basic structure of the game. Nodes at the bottom row of the graph correspond to the variables; the un-negated-variable clauses and negated-variable clauses are connected to their corresponding un-negated and negated variable players with bidirectional transfer probability q.

Setting the parameters of the clause players. Wlog, we can set the parameters to be identical for all clause players i: find Ri > 0 and αi > 1 − pi such that (1 − q)² > ∆sc_i > (1 − q)³.

Setting the parameters of the variable players. Wlog, we can set the parameters to be identical for all variable players i: find Ri > 0 and αi < 1 − pi such that 1 > ∆ss_i > (1 − q).

We now show that there exists a satisfiable assignment if and only if there exists a PSNE.

Satisfiable assignment ⟹ PSNE. Suppose that we have a satisfying assignment of the 3-SAT variant. This implies that every clause player plays invest. Moreover, for each clause player, there must be some corresponding variable player that plays invest. Given a satisfying assignment, negated and un-negated variable players cannot play the same action: one of them must play invest and the other must play no-invest. The investing variable player is best-responding because at least one of its neighbors (namely its negation) is playing no-invest. The non-investing variable player is best-responding because all of its neighbors are investing. Hence, all the players are best-responding to each other, and thus we have a PSNE.

PSNE ⟹ satisfiable assignment. (a) First we show that at every PSNE, all of the clause players must play invest.
For the sake of contradiction, suppose that there is a PSNE in which some clause players play no-invest. For those no-invest clause players, all of their variable players must play no-invest at the PSNE. However, by the best-response conditions of the variable players, if there exists a clause player that plays no-invest, then at least one of its variable players must play invest, which contradicts the fact that we have a PSNE. (b) We now show that at every PSNE, the un-negated variable player and the corresponding negated variable player must play different actions. Suppose that there is a PSNE in which both players play the same action, (i) no-invest or (ii) invest. In case (i), by their best-response conditions (given that at every PSNE all clause players play invest), neither of the variable players is best-responding, so one of them must switch from no-invest to invest. In case (ii), again by the best-response condition, one of them must play no-invest. (c) Finally, we need to show that at every PSNE there must be a variable player that makes every clause player play invest. To see this, note that, by each clause's best-response condition, there must be at least one of its variable players playing invest: if a clause played invest when none of its variable players played invest, then the clause player would not be best-responding. ⊓⊔

3.1.1 SC α-IDS games

What is the complexity of determining whether a PSNE exists in SC α-IDS games (i.e., αi > 1 − pi)? It turns out that SC players tend to follow the actions of other agents: if enough SC players invest, then some remaining SC player(s) will follow suit. This is evident from the safety function and the best-response condition. Consider the dynamics in which everybody starts off at no-invest. If some players are not best-responding, then their best (dominant) strategy is to invest.
We can safely change the actions of those players to invest. Then, for the remaining players, we continue to check whether any of them is not best-responding. If none, we have a PSNE; otherwise, we change the strategy of the non-best-responding players to invest. The process continues until we reach a PSNE.

Theorem 2 There is an O(n²)-time algorithm to compute a PSNE of any n-player SC α-IDS game.

Note that once a player plays invest, other players will either stay at no-invest or move to invest. The non-investing players do not affect the strategy of the players that have already decided to invest, and players that have decided to invest will continue to invest because only more players will invest.

3.1.2 SS α-IDS games

Unlike the SC case, an SS α-IDS game may not have a PSNE when n > 2.

Proposition 1 Suppose we have an n-player SS α-IDS game with 1 > ∆ss_i > (1 − qji), where j is the parent of i. (a) If the game graph is a directed tree, then the game has a PSNE. (b) If the game graph is a directed cycle, then the game has a PSNE if and only if n is even.

Proof (a) The root of the tree will always play no-invest while the immediate children of the root will always play invest at a PSNE. Moreover, assigning the action invest or no-invest to any node that has an odd or even (undirected) distance to the root, respectively, completes the PSNE. (b) For even n, any assignment in which an independent set of n/2 players plays invest forms a PSNE. For odd n, suppose there is a PSNE in which I players invest and N players do not invest, with I + N = n. The investing players must have I parents that do not invest, and the non-investing players must have N parents that play invest. Moreover, I ≤ N and N ≤ I imply that I = N. Hence, an odd-n cycle cannot have a PSNE. ⊓⊔

We leave open the computational complexity of determining whether SS α-IDS games have a PSNE.

3.2 Computing all NE in α-IDS games

We now study whether we can compute all MSNE of α-IDS games.
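Before moving on, the parity claim of Proposition 1(b) can be sanity-checked by brute force: in the stated parameter regime, an SS player on a directed cycle best-responds by investing exactly when its unique parent does not, so we can simply enumerate all joint actions (a toy check, not part of the paper's algorithms):

```python
from itertools import product

def is_psne_cycle(a):
    """In a directed n-cycle of SS players with 1 > Delta_ss > 1 - q
    (Proposition 1's regime), player i's best response is to invest
    exactly when its unique parent (i-1 mod n) does not invest."""
    n = len(a)
    return all(a[i] == 1 - a[(i - 1) % n] for i in range(n))

def has_psne_cycle(n):
    """Enumerate all 2^n joint actions on an n-cycle."""
    return any(is_psne_cycle(a) for a in product((0, 1), repeat=n))

print([has_psne_cycle(n) for n in (2, 3, 4, 5, 6)])
# -> [True, False, True, False, True]
```

The alternating invest/no-invest pattern required by the best responses wraps consistently around the cycle only when n is even, matching the proposition.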
We prove that we can compute all MSNE in polynomial time in the case of uniform-transfer SC α-IDS games, and a subset of all MSNE in the case of SS and SC+SS games. A uniform-transfer α-IDS game is an α-IDS game where the transfer probability from a particular player to the other players is the same regardless of the destination. More formally, qij = δi for all players i and j (i ≠ j). Hence, we have a complete graph with bidirectional transfer probabilities. We can express the overall safety function given a joint mixed strategy x ∈ [0, 1]^n as s(x) = ∏_{i=1}^n [1 − (1 − xi)δi]. Now, we can determine the best response of an SC or SS player exactly, based solely on the value of ∆sc_i(1 − (1 − ai)δi), for SC, relative to s(x); similarly for SS. We assume, wlog, that for all players i, Ri > 0, δi > 0, pi > 0, and αi > 0. Given a joint mixed strategy x, we partition the players by type wrt x: let I ≡ I(x) ≡ {i | xi = 1}, N ≡ N(x) ≡ {i | xi = 0}, and P ≡ P(x) ≡ {i | 0 < xi < 1} be the sets of players that, wrt x, fully invest in protection, do not invest in protection, and partially invest in protection, respectively.

3.2.1 Uniform-transfer SC α-IDS games

The results of this section are non-trivial extensions of those of Kearns and Ortiz [8]. In particular, we can construct a polynomial-time algorithm to compute all MSNE of a uniform-transfer SC α-IDS game, along the same lines as Kearns and Ortiz [8], by extending their Ordering Lemma (their Lemma 3) and Partial-Ordering Lemma (their Lemma 4).^1 Appendices A.1 and B of the supplementary material contain our versions of the lemmas and detailed pseudocode for the algorithm, respectively. A running-time analysis similar to that for traditional uniform-transfer IDS games done by Kearns and Ortiz [8] yields our next algorithmic result.

Theorem 3 There exists an O(n⁴)-time algorithm to compute all MSNE of a uniform-transfer n-player SC α-IDS game.

The significance of the theorem lies in its simplicity.
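The uniform-transfer safety function s(x) and the (I, N, P) partition just introduced can be written down directly; the function names are ours, for illustration:

```python
import numpy as np

def overall_safety(x, delta):
    """Uniform-transfer case: s(x) = prod_i [1 - (1 - x_i) delta_i],
    where x_i is player i's invest probability and delta_i its
    (destination-independent) transfer probability."""
    x = np.asarray(x, dtype=float)
    delta = np.asarray(delta, dtype=float)
    return float(np.prod(1 - (1 - x) * delta))

def partition(x):
    """Partition players by mixed strategy: fully investing (I),
    not investing (N), and partially investing (P)."""
    I = [i for i, xi in enumerate(x) if xi == 1]
    N = [i for i, xi in enumerate(x) if xi == 0]
    P = [i for i, xi in enumerate(x) if 0 < xi < 1]
    return I, N, P
```

Note that players in I contribute a factor of 1 to s(x), so only the non-investing and partially-investing players lower the overall safety; this is the structure the ordering lemmas exploit.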
That we can extend almost the same computational results, and structural implications on the solution space, to a considerably more general, and perhaps even more realistic, model, via what in hindsight were simple adaptations, is positive. 3.2.2 Uniform-transfer SS α-IDS games Unlike the SC case, the ordering we get for the SS case does not yield an analogous lemma. Nevertheless, it turns out that we can still determine the mixed strategies of the partially-investing players in P relative to a partition. The result is a Partial-Investment Lemma that is analogous to that of Kearns and Ortiz [8] for traditional IDS games.² For completeness, Appendix A.2 of the supplementary material formally states the lemma. We remind the reader that the significance and strength of this non-trivial extension lies in its simplicity, particularly when we note that the nature of the SS case is the complete opposite of the version of IDS games studied by Kearns and Ortiz [8]. Indeed, a naive way to compute all NE is to consider all of the possible ways of partitioning the players into the investment, partial-investment, and no-investment sets and apply the Partial-Investment Lemma alluded to in the previous paragraph to compute the mixed strategies. However, this would take O(n_ss 3^{n_ss}) worst-case time to compute any equilibrium. So, how can we efficiently perform this computation? As mentioned earlier, SS players are less likely to invest when a large number of players is investing, and their behavior is the "opposite" of that of the SC players (i.e., the best response is flipped). Hence, imposing a "flip" ordering (Ordering 1) that is the opposite of the SC case seems natural.

¹ Take their R_i/p_i's and replace them with our corresponding ∆^sc_i's.
² Take their Lemma 4 and replace R_i/p_i there by ∆^ss_i here, and replace the expression for V there by V ≡ [max_{k∈N} (1 − δ_k)∆^ss_k, min_{i∈I} ∆^ss_i].
If we assume such a specific ordering of the players at equilibrium, then we can compute all NE consistent with that specific ordering efficiently, as discussed earlier for the SC case. Mirroring the SC α-IDS game, we settle for computing all NE that satisfy the following ordering. Ordering 1 For all i ∈ I^ss, j ∈ P^ss, and k ∈ N^ss, (1 − δ_k)∆^ss_k ≤ (1 − δ_j)∆^ss_j < ∆^ss_j, (1 − δ_j)∆^ss_j ≤ ∆^ss_j ≤ ∆^ss_i, and (1 − δ_k)∆^ss_k ≤ (1 − δ_i)∆^ss_i ≤ ∆^ss_i. The first and last sets of inequalities (ignoring the middle one) follow from the consistency constraint imposed by the overall safety function. The middle set of inequalities restricts and reduces the number of possible NE configurations we need to check. It is possible that (1 − δ_k)∆^ss_k > (1 − δ_j)∆^ss_j or (1 − δ_k)∆^ss_k > (1 − δ_i)∆^ss_i at an NE, but we do not consider those types of NE. Our hardness results presented in the upcoming Section 3.2.4 suggest that, in general, computing all MSNE without any of the constraints above is likely hard. (See Algorithm 2 of the supplementary material.) Theorem 4 There exists an O(n^4)-time algorithm to compute all MSNE consistent with Ordering 1 of a uniform-transfer n-player SS α-IDS game. 3.2.3 Uniform-transfer SC+SS α-IDS games For the uniform variant of the SC+SS α-IDS games, we could partition the players into either SC or SS and modify the respective algorithms to compute all NE. Unfortunately, this is computationally infeasible, because we can compute all NE in polynomial time only in the SC case. Again, if we settle for computing all NE consistent with Ordering 1, then we can devise an efficient algorithm. From now on, the fact that we are only considering NE consistent with Ordering 1 is implicit, unless noted otherwise. The idea is to partition the players into a class of SC and a class of SS players. From the characterizations stated earlier, it is clear that there is only a polynomial number of possible partitions we need to check for each class of players.
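Ordering 1 can be read as a feasibility check on a candidate partition of the SS players; a small sketch, where `Delta` holds the ∆^ss values and `delta` the uniform transfer probabilities (both hypothetical inputs for illustration):

```python
def consistent_with_ordering1(I, P, N, Delta, delta):
    """Check whether a partition of SS players into fully investing (I),
    partially investing (P), and non-investing (N) satisfies Ordering 1.
    Delta[i] is Delta^ss_i; delta[i] is player i's transfer probability."""
    g = lambda i: (1.0 - delta[i]) * Delta[i]   # scaled threshold (1 - d_i) * D_i
    for j in P:
        # (1 - d_k) D_k <= (1 - d_j) D_j < D_j for all k in N
        if any(g(k) > g(j) for k in N) or g(j) >= Delta[j]:
            return False
        # middle chain: D_j <= D_i for all i in I
        if any(Delta[j] > Delta[i] for i in I):
            return False
    for i in I:
        # (1 - d_k) D_k <= (1 - d_i) D_i for all k in N
        if any(g(k) > g(i) for k in N):
            return False
    return True
```

For instance, with ∆^ss = (2.0, 1.0, 0.5) and δ = (0.1, 0.5, 0.2), the partition I = {0}, P = {1}, N = {2} is consistent with Ordering 1, while swapping I and N is not.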
Since the ordering results are based on the same overall safety function, the orderings of the SC and SS players do not affect each other. Hence, w.l.o.g., starting with the algorithm described earlier as a base routine for the SC players, we do the following. For each possible equilibrium configuration of the SC players, we first run the algorithm described in the previous section for the SS players and then test whether the resulting joint mixed strategy is an NE. This guarantees that we check every possible equilibrium combination. A running-time analysis yields our next result. Theorem 5 There exists an O(n_sc^4 n_ss^3 + n_sc^3 n_ss^4)-time algorithm to compute all NE consistent with Ordering 1 of a uniform-transfer n-player SC+SS α-IDS game, where n = n_sc + n_ss. 3.2.4 Computing all MSNE of arbitrary α-IDS games is intractable, in general In this section, we prove that determining whether there exists a PSNE consistent with a partial assignment of the actions to some players is NP-complete, even if the transfer probability takes only two values: δ_i ∈ {0, q} for q ∈ (0, 1). We consider the pure-Nash-extension problem [8] for binary-action n-player games, which takes as input a description of the game and a partial assignment a ∈ {0, 1, ∗}^n. We want to know whether there is a complete assignment b ∈ {0, 1}^n consistent with a. Indeed, computing all NE is at least as difficult as the pure-Nash-extension problem. Appendix C presents proofs of our next results.
Table 2: Level of investment of SC+SS α-IDS games at Nash equilibrium. %SS = percentage of SS players; %SC Inv (%SS Inv) = percentage of SC (SS) players investing; N(µ, σ²) = normal distribution with mean µ and variance σ²; * = 0.001-NE, ** = 0.005-NE.

High C_i/L_i:
                     α_i ~ N(0.4, 0.2)           α_i ~ N(0.8, 0.2)           α_i ∈ [0, 1]
Dataset            %SS   %SC Inv  %SS Inv     %SS   %SC Inv  %SS Inv     %SS   %SC Inv  %SS Inv
Karate Club       76.18  100.00   21.37      12.35  100.00    0.00      56.18  100.00   14.88
Les Miserables    75.45  100.00   17.93      11.82   99.85    0.67      55.06   99.40   14.84
College Football  75.65  100.00   15.47      11.57  100.00    0.00      55.39  100.00   13.46
Power Grid        75.47   97.76*  19.38*     12.82   98.79*   2.13*     55.01   97.31** 15.90**
Wiki Vote         75.55   97.46*  17.87*     12.78   98.92*   2.06*     55.02   97.00** 14.75**
Email Enron       75.29   95.97*  19.91*     12.53   97.92*   2.24*     54.78   94.39** 16.84**

Low C_i/L_i:
Karate Club       99.41  100.00   49.64      60.59  100.00   23.19      86.18  100.00   41.34
Les Miserables    98.96  100.00   51.17      59.22  100.00   28.34      85.71  100.00   49.26
College Football  98.87  100.00   60.42      61.48  100.00   28.30      86.35  100.00   54.87
Power Grid        98.68   99.13*  49.45*     59.41   98.81*  28.66*     85.20   99.13** 45.07**
Wiki Vote         98.62   98.30*  46.50*     59.89   97.38*  27.54*     85.01   98.51** 44.45**
Email Enron       98.73   97.96** 49.80**    59.85   96.48*  29.32*     84.94   98.00** 44.72**

Theorem 6 The pure-Nash-extension problem for n-player SC α-IDS games is NP-complete. A similar proof argument yields the following computational-complexity result. Theorem 7 The pure-Nash-extension problem for n-player SS α-IDS games is NP-complete. Combining Theorems 6 and 7 yields the next corollary. Corollary 1 The pure-Nash-extension problem for n-player SC+SS α-IDS games is NP-complete. 4 Preliminary Experimental Results To illustrate the impact of the α parameter on α-IDS games, we perform experiments on randomly-generated instances of α-IDS games in which we compute a possibly approximate NE.
Given ϵ > 0, in an approximate ϵ-NE each individual's unilateral deviation cannot reduce the individual's expected cost by more than ϵ. The underlying structures of the instances use network graphs from publicly-available, real-world datasets [6, 16–20]. Appendix D of the supplementary material provides more specific information on the sizes of the different graphs in the real-world datasets. The number of nodes/players ranges from 34 to ≈37K, while the number of edges ranges from 78 to ≈368K. The table lists the graphs in increasing size (from top to bottom). To generate each instance, we generate (1) C_i/L_i, where C_i = 10^3 · (1 + random(0, 1)) and L_i = 10^4 (or L_i = 10^4/3) to obtain a low (high) cost-to-loss ratio, and α_i values as specified in the experiments; (2) p_i such that ∆^sc_i or ∆^ss_i is in [0, 1]; and (3) q_ji's consistent with the probabilistic constraints relative to the other parameters (i.e., p_i + Σ_{j∈Pa(i)} q_ji ≤ 1). On each instance, we initialize the players' mixed strategies uniformly at random and run a simple gradient-dynamics heuristic based on regret minimization [21–23] until we reach an (ϵ-)NE. In short, we update the strategies of all non-ϵ-best-responding players i at each round t according to x_i^(t+1) ← x_i^(t) − 10 × (M_i(1, x_Pa(i)^(t)) − M_i(0, x_Pa(i)^(t))). Note that for ϵ-NE to be well-defined, all the M_i values are normalized. Given that our main interest is to study the structural properties of arbitrary α-IDS games, our hardness results on computing NE in such games justify the use of a heuristic as we do here. (Kearns and Ortiz [8] and Chan et al. [7] also used a similar heuristic in their experiments.) Table 2 shows the average level of investment at NE over ten runs on each graph instance. We observe that higher α values generate more SC players, consistent with the nature of the game instances. Almost all of the SC players invest while most of the SS players do not invest, regardless of the number of players in the games and the α values.
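The gradient-dynamics heuristic described above can be sketched as follows; `cost` is a hypothetical stand-in for the normalized cost function M_i(a_i, x_Pa(i)), while the step size 10, the random initialization, and the clipping to [0, 1] follow the update rule in the text:

```python
import random

def gradient_dynamics(n, cost, eps=1e-3, lr=10.0, max_rounds=10000, seed=0):
    """Regret-minimization-style dynamics for an approximate eps-NE.

    cost(i, a, x) is assumed to return player i's normalized expected
    cost when i invests with probability a and the others follow x.
    """
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]          # uniform random initialization
    for _ in range(max_rounds):
        # players whose unilateral deviation saves strictly more than eps
        updating = [i for i in range(n)
                    if cost(i, x[i], x) - min(cost(i, 0.0, x),
                                              cost(i, 1.0, x)) > eps]
        if not updating:
            return x                              # eps-NE reached
        for i in updating:
            grad = cost(i, 1.0, x) - cost(i, 0.0, x)
            x[i] = min(1.0, max(0.0, x[i] - lr * grad))
    return x                                      # give up after max_rounds
```

As a sanity check, if investing is always costly (cost equal to the investment probability itself), every player is driven to 0.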
This makes sense because of the nature of the SC and SS players. Going from the high to the low cost-to-loss ratio, we see that the number of SS players and the percentage of SS players investing at an NE increase across all α values. In both the high and the low cost-to-loss-ratio cases, we see a similar behavior in which the majority of the SS players do not invest (≈50%). Acknowledgments This material is based upon work supported by an NSF Graduate Research Fellowship (first author) and an NSF CAREER Award IIS-1054541 (second author). References [1] Geoffrey Heal and Howard Kunreuther. Interdependent security: A general model. Working Paper 10706, National Bureau of Economic Research, August 2004. [2] Geoffrey Heal and Howard Kunreuther. IDS models of airline security. Journal of Conflict Resolution, 49(2):201–217, April 2005. [3] Geoffrey Heal and Howard Kunreuther. The vaccination game. Working paper, Wharton Risk Management and Decision Processes Center, January 2005. [4] Konstantinos Gkonis and Harilaos Psaraftis. Container transportation as an interdependent security problem. Journal of Transportation Security, 3:197–211, 2010. [5] Aron Laszka, Mark Felegyhazi, and Levente Buttyan. A survey of interdependent information security games. ACM Comput. Surv., 47(2):23:1–23:38, August 2014. [6] W.W. Zachary. An information flow model for conflict and fission in small groups. Journal of Anthropological Research, 33:452–473, 1977. [7] Hau Chan, Michael Ceyko, and Luis E. Ortiz. Interdependent defense games: Modeling interdependent security under deliberate attacks. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, UAI '12, pages 152–162, 2012. [8] Michael Kearns and Luis E. Ortiz. Algorithms for interdependent security games. In Advances in Neural Information Processing Systems, NIPS '04, pages 561–568, 2004. [9] Michael Kearns, Michael Littman, and Satinder Singh. Graphical models for game theory.
In Proceedings of the Conference on Uncertainty in Artificial Intelligence, UAI '01, pages 253–260, 2001. [10] Georg Gottlob, Gianluigi Greco, and Francesco Scarcello. Pure Nash equilibria: Hard and easy games. In Proceedings of the 9th Conference on Theoretical Aspects of Rationality and Knowledge, TARK '03, pages 215–230, 2003. [11] John F. Nash. Equilibrium points in n-person games. Proceedings of the National Academy of Sciences of the United States of America, 35(1):48–49, Jan. 1950. [12] Constantinos Daskalakis, Paul W. Goldberg, and Christos H. Papadimitriou. The complexity of computing a Nash equilibrium. In Proceedings of the Thirty-eighth Annual ACM Symposium on Theory of Computing, STOC '06, pages 71–78, 2006. [13] Xi Chen, Xiaotie Deng, and Shang-Hua Teng. Settling the complexity of computing two-player Nash equilibria. J. ACM, 56(3):14:1–14:57, May 2009. [14] Edith Elkind, Leslie Ann Goldberg, and Paul Goldberg. Nash equilibria in graphical games on trees revisited. In Proceedings of the 7th ACM Conference on Electronic Commerce, EC '06, pages 100–109, 2006. [15] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., New York, NY, USA, 1979. [16] Donald E. Knuth. The Stanford GraphBase: A Platform for Combinatorial Computing. ACM, New York, NY, USA, 1993. [17] M. Girvan and M. E. J. Newman. Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99(12):7821–7826, 2002. [18] D.J. Watts and S.H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 393:440–442, 1998. [19] Jure Leskovec, Daniel Huttenlocher, and Jon Kleinberg. Signed networks in social media. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10, pages 1361–1370, 2010. [20] Bryan Klimt and Yiming Yang. Introducing the Enron corpus. In CEAS, 2004. [21] Drew Fudenberg and David K. Levine.
The Theory of Learning in Games, volume 1 of MIT Press Books. The MIT Press, June 1998. [22] Noam Nisan, Tim Roughgarden, Éva Tardos, and Vijay V. Vazirani, editors. Algorithmic Game Theory. Cambridge University Press, 2007. [23] Yoav Shoham and Kevin Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, Cambridge, UK, 2009.
Multitask learning meets tensor factorization: task imputation via convex optimization Kishan Wimalawarne Tokyo Institute of Technology Meguro-ku, Tokyo, Japan kishan@sg.cs.titech.ac.jp Masashi Sugiyama The University of Tokyo Bunkyo-ku, Tokyo, Japan sugi@k.u-tokyo.ac.jp Ryota Tomioka TTI-C Illinois, Chicago, USA tomioka@ttic.edu Abstract We study a multitask learning problem in which each task is parametrized by a weight vector and indexed by a pair of indices, which can be, e.g., (consumer, time). The weight vectors can be collected into a tensor, and the (multilinear) rank of the tensor controls the amount of sharing of information among tasks. Two types of convex relaxations have recently been proposed for the tensor multilinear rank. However, we argue that both of them are not optimal in the context of multitask learning, in which the dimensions or the multilinear rank are typically heterogeneous. We propose a new norm, which we call the scaled latent trace norm, and analyze the excess risk of all three norms. The results apply to various settings including matrix and tensor completion, multitask learning, and multilinear multitask learning. Both the theory and the experiments support the advantage of the new norm when the tensor is not equal-sized and we do not know a priori which mode is low rank. 1 Introduction We consider supervised multitask learning problems [1, 6, 7] in which the tasks are indexed by a pair of indices, known as multilinear multitask learning (MLMTL) [17, 19]. For example, when we would like to predict the ratings of different aspects (e.g., quality of service, food, etc.) of restaurants by different customers, the tasks would be indexed by aspects × customers. When each task is parametrized by a weight vector over features, the goal would be to learn a features × aspects × customers tensor. Another possible task dimension would be time, since the ratings may change over time.
This setting is interesting because it would allow us to exploit the similarities across different customers as well as the similarities across different aspects or time-points. Furthermore, this would allow us to perform task imputation, that is, to learn weights for tasks for which we have no training examples. On the other hand, the conventional matrix-based multitask learning (MTL) [2, 3, 13, 16] may fail to capture the higher-order structure if we consider learning a flat features × tasks matrix, and it would require at least r samples for each task, where r is the rank of the matrix to be learned. Recently, several norms that induce low-rank tensors in the sense of the Tucker decomposition or the multilinear singular value decomposition [8, 9, 14, 25] have been proposed. The mean squared error for recovering an n_1 × · · · × n_K tensor of multilinear rank (r_1, . . . , r_K) from its noisy version scales as O_p(((1/K) Σ_{k=1}^K √r_k)² ((1/K) Σ_{k=1}^K 1/√n_k)²) for the overlapped trace norm [23]. On the other hand, the error of the latent trace norm scales as O_p(min_k r_k / min_k n_k) in the same setting [21]. Thus, while the latent trace norm has the better dependence in terms of the multilinear rank r_k, it has the worse dependence in terms of the dimensions n_k. Tensors that arise in multitask learning typically have heterogeneous dimensions. For example, the number of aspects for a restaurant (quality of service, food, atmosphere, etc.) would be much smaller than the number of customers or the number of features. In addition, it is a priori unclear which mode (or dimension) would have the most redundancy or sharing that could be exploited by multitask learning.

Table 1: Tensor denoising performance using different norms. The mean squared error |||Ŵ − W*|||²_F / N is shown for the denoising algorithm (3) using different norms for tensors.
Overlapped trace norm:    O_p(((1/K) Σ_{k=1}^K √r_k)² ((1/K) Σ_{k=1}^K 1/√n_k)²)
Latent trace norm:        O_p(min_k r_k / min_k n_k)
Scaled latent trace norm: O_p(min_k (r_k/n_k))
Some of the modes may have full rank if there is no sharing of information along them. Therefore, both the latent trace norm and the overlapped trace norm would suffer either from the heterogeneous multilinear rank or from the heterogeneous dimensions in this context. In this paper, we propose a modification to the latent trace norm whose mean squared error scales as O(min_k (r_k/n_k)) in the same setting, which is better than both of the previously proposed extensions of the trace norm for tensors. We study the excess risk of the three norms through their Rademacher complexities in various settings including matrix completion, multitask learning, and MLMTL. The new analysis also allows us to study the tensor completion setting, which was only empirically studied in [22, 23]. Our analysis consistently shows the advantage of the proposed scaled latent trace norm in settings in which the dimensions or the ranks are heterogeneous. Experiments on both synthetic and real data sets are also consistent with our theoretical findings. 2 Norms for tensors and their denoising performance Let W ∈ R^{n_1×···×n_K} be a K-way tensor. We denote the total number of entries by N := ∏_{k=1}^K n_k. A mode-k fiber of W is an n_k-dimensional vector obtained by fixing all but the kth index. The mode-k unfolding W_(k) of W is the n_k × (N/n_k) matrix formed by concatenating all the N/n_k mode-k fibers along columns. We say that W has multilinear rank (r_1, . . . , r_K) if r_k = rank(W_(k)). 2.1 Existing norms for tensors First we review two norms proposed in the literature in order to convexify tensor decomposition. The overlapped trace norm (see [12, 15, 18, 22]) is defined as the sum of the trace norms of the mode-k unfoldings: |||W|||_overlap = Σ_{k=1}^K ∥W_(k)∥_tr, (1) where ∥·∥_tr is the trace norm (also known as the nuclear norm) [10, 20], defined as the absolute sum of the singular values. Romera-Paredes et al. [17] have used the overlapped trace norm in MLMTL.
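The mode-k unfolding, the multilinear rank, and the overlapped trace norm (1) can be computed directly with numpy; a self-contained sketch (the column ordering of the unfolding is one convention, and neither the rank nor the norms depend on it):

```python
import numpy as np

def unfold(W, k):
    """Mode-k unfolding W_(k): an n_k x (N / n_k) matrix whose columns
    are the mode-k fibers of W."""
    return np.moveaxis(W, k, 0).reshape(W.shape[k], -1)

def multilinear_rank(W, tol=1e-9):
    """Multilinear rank (r_1, ..., r_K), where r_k = rank(W_(k))."""
    return tuple(int(np.linalg.matrix_rank(unfold(W, k), tol=tol))
                 for k in range(W.ndim))

def overlapped_trace_norm(W):
    """|||W|||_overlap = sum_k ||W_(k)||_tr (eq. (1)): the sum over modes
    of the nuclear norms of the unfoldings."""
    return sum(np.linalg.norm(unfold(W, k), ord='nuc')
               for k in range(W.ndim))

# A multilinear-rank-(1, 1, 1) example: the outer product of three vectors.
W = np.einsum('i,j,k->ijk', np.ones(4), np.ones(5), np.ones(6))
# multilinear_rank(W) -> (1, 1, 1)
```

For this rank-one example each unfolding has a single nonzero singular value equal to the Frobenius norm, so the overlapped trace norm is 3∥W∥_F.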
The latent trace norm [21, 22] is defined as an infimum over K tensors: |||W|||_latent = inf_{W^(1)+···+W^(K)=W} Σ_{k=1}^K ∥W^(k)_(k)∥_tr. (2) Table 1 summarizes the denoising performance in mean squared error analyzed in Tomioka and Suzuki [21] for the above two norms. The setting is as follows: we observe a noisy version Y of a tensor W* with multilinear rank (r_1, . . . , r_K) and would like to recover W* by solving Ŵ = argmin_W ( (1/2) |||W − Y|||²_F + λ |||W|||_⋆ ), (3) where |||·|||_⋆ is either the overlapped trace norm or the latent trace norm. We can see that while the latent trace norm has the better dependence in terms of the multilinear rank, it has the worse dependence in terms of the dimensions. Intuitively, the latent trace norm recognizes the mode with the lowest rank. However, it does not have a good control of the dimensions; in fact, the factor 1/min_k n_k comes from the fact that for a random tensor X with i.i.d. Gaussian entries, the expectation of the dual norm ∥X∥_latent* = max_k ∥X_(k)∥_op behaves like O_p(√(max_k N/n_k)), where ∥·∥_op is the operator norm. 2.2 A new norm In order to correct the unfavorable behavior of the dual norm, we propose the scaled latent trace norm. It is defined similarly to the latent trace norm, with weights 1/√n_k: |||W|||_scaled = inf_{W^(1)+···+W^(K)=W} Σ_{k=1}^K (1/√n_k) ∥W^(k)_(k)∥_tr. (4) Now the expectation of the dual norm ∥X∥_scaled* = max_k √n_k ∥X_(k)∥_op behaves like O_p(√N) for X with i.i.d. Gaussian entries, and combined with the relation |||W|||_scaled ≤ min_k √(r_k/n_k) |||W|||_F, (5) we obtain the scaling of the mean squared error in the last column of Table 1. We can see that the scaled latent trace norm recognizes the mode with the lowest rank relative to its dimension. 3 Theory for multilinear multitask learning We consider T = PQ supervised learning tasks. Training samples (x_ipq, y_ipq)_{i=1}^{m_pq} ((p, q) ∈ S) are provided for a relatively small fraction of the task index pairs S ⊂ [P] × [Q].
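The dual norms that drive this comparison are cheap to evaluate; a numpy sketch computing both the plain and the scaled latent dual norms from the operator norms of the unfoldings:

```python
import numpy as np

def dual_norms(X):
    """Return (||X||_latent*, ||X||_scaled*), where
    ||X||_latent* = max_k ||X_(k)||_op and
    ||X||_scaled* = max_k sqrt(n_k) ||X_(k)||_op."""
    ops = []
    for k in range(X.ndim):
        Xk = np.moveaxis(X, k, 0).reshape(X.shape[k], -1)   # mode-k unfolding
        ops.append(np.linalg.norm(Xk, ord=2))               # operator norm
    latent_dual = max(ops)
    scaled_dual = max(np.sqrt(n) * s for n, s in zip(X.shape, ops))
    return latent_dual, scaled_dual
```

For instance, for the all-ones 2 × 2 × 2 tensor every unfolding has operator norm 2√2, so the latent dual is 2√2 and the scaled dual is √2 · 2√2 = 4.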
Each task is parametrized by a weight vector w_pq ∈ R^d; these can be collected into a 3-way tensor W = (w_pq) ∈ R^{d×P×Q} whose (p, q) fiber is w_pq. We define the learning problem as follows: Ŵ = argmin_{W ∈ R^{d×P×Q}} L̂(W), subject to |||W|||_⋆ ≤ B_0, (6) where the norm |||·|||_⋆ is either the overlapped trace norm, the latent trace norm, or the scaled latent trace norm, and the empirical risk L̂ is defined as L̂(W) = (1/|S|) Σ_{(p,q)∈S} (1/m_pq) Σ_{i=1}^{m_pq} ℓ(⟨x_ipq, w_pq⟩ − y_ipq). The true risk we are interested in minimizing is defined as L(W) = (1/PQ) Σ_{p,q} E_{(x,y)∼P_pq} ℓ(⟨x, w_pq⟩ − y), where P_pq is the distribution from which the samples (x_ipq, y_ipq)_{i=1}^{m_pq} are drawn. The next lemma relates the excess risk L(Ŵ) − L(W*) to the expected dual norm E |||D|||_⋆* through the Rademacher complexity. Lemma 1. We assume that the output y_ipq is bounded as |y_ipq| ≤ b, and that the number of samples m_pq ≥ m > 0 for the observed tasks. We also assume that the loss function ℓ is Lipschitz continuous with constant Λ, bounded in [0, c], and that ℓ(0) = 0. Let W* be any tensor such that |||W*|||_⋆ ≤ B_0. Then, with probability at least 1 − δ, any minimizer of (6) satisfies the following bound: L(Ŵ) − L(W*) ≤ 2Λ ( (2B_0/|S|) E |||D|||_⋆* + b√ρ / √(|S|m) ) + c′ √(log(2/δ) / (2|S|m)), where c′ = c + 1, |||·|||_⋆* is the dual norm of |||·|||_⋆, and ρ := (1/|S|) Σ_{(p,q)∈S} m_pq/m. The tensor D ∈ R^{d×P×Q} is defined as the sum D = Σ_{(p,q)∈S} Σ_{i=1}^{m_pq} Z_ipq, where the (p′, q′)th fiber of Z_ipq ∈ R^{d×P×Q} is (1/m_pq) σ_ipq x_ipq if p = p′ and q = q′, and 0 otherwise. Here σ_ipq ∈ {−1, +1} are Rademacher random variables, and the expectation in the above inequality is with respect to σ_ipq, the random draw of tasks S, and the training samples (x_ipq, y_ipq)_{i=1}^{m_pq}. Proof. The proof is a standard one following the lines of [5] and is presented in Appendix A. The next theorem computes the expected dual norm E |||D|||_⋆* for the three norms for tensors (the proof can be found in Appendix B). Theorem 1.
We assume that C_pq := E[x_ipq x_ipq^⊤] ⪯ (κ/d) I_d and that there is a constant R > 0 such that ∥x_ipq∥ ≤ R almost surely. Let us define D_1 := d + PQ, D_2 := P + dQ, D_3 := Q + dP. In order to simplify the presentation, we assume that max_k D_k ≥ 3 and dPQ ≥ max(d², P², Q²). For the overlapped trace norm, the latent trace norm, and the scaled latent trace norm, the expectation E |||D|||_⋆* can be bounded as follows: (1/|S|) E |||D|||_overlap* ≤ C min_k ( √(κ D_k log D_k / (m|S| dPQ)) + (R/(m|S|)) log D_k ), (7) (1/|S|) E |||D|||_latent* ≤ C′ ( √(κ max_k (D_k log D_k) / (m|S| dPQ)) + (R/(m|S|)) log(max_k D_k) ), (8) (1/|S|) E |||D|||_scaled* ≤ C″ ( √(κ log(max_k D_k) / (m|S|)) + (R √(max_k n_k) / (m|S|)) log(max_k D_k) ), (9) where C, C′, C″ are constants, n_1 = d, n_2 = P, and n_3 = Q. Furthermore, if m|S| ≥ R²(max_k n_k) log(max_k D_k)/κ, the O(1/(m|S|)) terms in the above inequalities can be dropped. Note that the assumption that the norm of x_ipq is bounded is natural because the target y_ipq is also bounded. The parameter κ in the assumption C_pq ⪯ (κ/d) I_d controls the amount of correlation in the data. Since Tr(C) = E∥x_ipq∥² ≤ R², we have κ = O(1) when the features are uncorrelated; on the other hand, we have κ = O(d) if they lie in a one-dimensional subspace. The number of samples m|S| = Õ(max_k n_k) is enough to drop the O(1/(m|S|)) term even if κ = O(1). Now we state the consequences of Theorem 1 for the three norms for tensors. The common assumptions are the same as in Lemma 1 and Theorem 1. We also assume m|S| ≥ R²(max_k n_k) log(max_k D_k)/κ to drop the O(1/(m|S|)) terms. Let W* be any d × P × Q tensor with multilinear rank (r_1, r_2, r_3), bounded element-wise as |||W*|||_ℓ∞ ≤ B. Corollary 1 (Overlapped trace norm). With probability at least 1 − δ, any minimizer of (6) with |||W|||_overlap ≤ B √(∥r∥_{1/2} dPQ) satisfies the following inequality: L(Ŵ) − L(W*) ≤ c_1 Λ B √(κ ∥r∥_{1/2} min_k (D_k log D_k) / (m|S|)) + c_2 Λ b √(ρ/(m|S|)) + c_3 √(log(2/δ)/(m|S|)), where ∥r∥_{1/2} = ((1/3) Σ_{k=1}^3 √r_k)² and c_1, c_2, c_3 are constants. Note that Tomioka et al.
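For intuition, here is a sketch (with constants absorbed into c_1) of how Corollary 1 follows from Lemma 1 and bound (7): the radius B_0 = B√(∥r∥_{1/2} dPQ) comes, up to the constant K = 3, from |||W*|||_overlap = Σ_k ∥W*_(k)∥_tr ≤ Σ_k √r_k ∥W*∥_F together with ∥W*∥_F ≤ B√(dPQ), since |||W*|||_ℓ∞ ≤ B.

```latex
% Sketch: Corollary 1 from Lemma 1 and (7); constants absorbed into c_1.
% Radius: |||W^*|||_{\mathrm{overlap}} \le \sum_k \sqrt{r_k}\,\|W^*\|_F
%         \le 3\sqrt{\|r\|_{1/2}}\, B\sqrt{dPQ} =: 3 B_0.
\begin{align*}
L(\hat{\mathcal{W}}) - L(\mathcal{W}^*)
 &\le 2\Lambda\,\frac{2B_0}{|S|}\,\mathbb{E}\,\|\mathcal{D}\|_{\mathrm{overlap}*}
      + 2\Lambda\,\frac{b\sqrt{\rho}}{\sqrt{|S|m}}
      + c'\sqrt{\frac{\log(2/\delta)}{2|S|m}} \\
 &\le c_1 \Lambda B \sqrt{\frac{\kappa\,\|r\|_{1/2}\,\min_k (D_k\log D_k)}{m|S|}}
      + c_2 \Lambda b \sqrt{\frac{\rho}{m|S|}}
      + c_3 \sqrt{\frac{\log(2/\delta)}{m|S|}},
\end{align*}
% using B_0 = B\sqrt{\|r\|_{1/2}\, dPQ} and, once m|S| is large enough to
% drop the O(1/(m|S|)) term in (7),
% \tfrac{1}{|S|}\mathbb{E}\|\mathcal{D}\|_{\mathrm{overlap}*}
%   \le C\sqrt{\kappa \min_k(D_k\log D_k)/(m|S|\,dPQ)}.
```

The √(dPQ) factor in B_0 cancels against the 1/√(dPQ) inside (7), which is why the final bound depends on the dimensions only through min_k(D_k log D_k).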
[23] obtained a bound that depends on ((1/3) Σ_{k=1}^3 √D_k)² instead of min_k(D_k log D_k). Although the minimum may look better than the average, our bound has the worse constant K = 3 hidden in c_1. The log D_k factor allows us to apply the above result to the setting of tensor completion, as we show below. Corollary 2 (Latent trace norm). With probability at least 1 − δ, any minimizer of (6) with |||W|||_latent ≤ B √(min_k r_k · dPQ) satisfies the following inequality: L(Ŵ) − L(W*) ≤ c′_1 Λ B √(κ (min_k r_k) max_k (D_k log D_k) / (m|S|)) + c_2 Λ b √(ρ/(m|S|)) + c_3 √(log(2/δ)/(m|S|)), where c′_1, c_2, c_3 are constants. Corollary 3 (Scaled latent trace norm). With probability at least 1 − δ, any minimizer of (6) with |||W|||_scaled ≤ B √(min_k (r_k/n_k) dPQ) satisfies the following inequality: L(Ŵ) − L(W*) ≤ c″_1 Λ B √(κ min_k (r_k/n_k) dPQ log(max_k D_k) / (m|S|)) + c_2 Λ b √(ρ/(m|S|)) + c_3 √(log(2/δ)/(m|S|)), where n_1 = d, n_2 = P, n_3 = Q, and c″_1, c_2, c_3 are constants. We summarize the implications of the above corollaries for different settings in Table 2. We almost recover the known settings for matrix completion [11] and multitask learning (MTL) [16]. Note that these simpler problems sometimes disguise themselves as the more general tensor completion or multilinear multitask learning problems. Therefore, it is important that the new tensor-based norms adapt to the simplicity of the problems in these cases. Matrix completion is the case d = κ = m = r_1 = 1, and we assume that r_2 = r_3 = r < P, Q. The sample complexities are the numbers of samples |S| that we need to make the leading terms in Corollaries 1, 2, and 3 equal to ϵ. We can see that the overlapped trace norm and the scaled latent trace norm recover the known result for matrix completion [11]. The plain latent trace norm requires O(PQ) samples because it recognizes the first mode as the mode with the lowest rank, 1. Although the rank r of the last two modes is low relative to their dimensions, the latent trace norm fails to recognize this.
In multitask learning (MTL), only the first mode, corresponding to features, has a low rank r, and the other two modes have full rank. Note that a tensor is a matrix when its multilinear rank is full except for one mode. We also assume that all the pairs (p, q) are observed (|S| = PQ), as in [16]. The sample complexities are defined the same way as above, with respect to the number of samples m, because |S| is fixed. The homogeneous case is when d = P = Q. The heterogeneous case is when P ≤ r < d. Our bound for the overlapped trace norm is almost as good as the one in [16], but it has a multiplicative log(PQ) factor (as opposed to their additive log(PQ) term) and ∥r∥_{1/2} ≥ r. Also note that the results in [16] can be applied when d is much larger than P and Q. Turning back to our bounds, both the latent trace norm and its scaled version can perform as well as knowing the mode with the lowest rank (the first mode) (see also [21]) when d = P = Q. However, when the dimensions are heterogeneous, similarly to the matrix completion case above, the plain latent trace norm fails to recognize the low-rank-ness of the first mode and requires O(d) samples, because the second mode has the lowest rank, P.

Table 2: Sample complexities of the overlapped trace norm, the latent trace norm, and the scaled latent trace norm in various settings. The common factor 1/ϵ² is omitted from the sample complexities. The sample complexities are defined with respect to |S| for matrix completion, m for multitask learning, and m|S| for tensor completion and multilinear multitask learning. In the heterogeneous cases, we assume P ≤ r < r′. We define ∥r∥_{1/2} = ((1/3) Σ_{k=1}^3 √r_k)² and N := n_1 n_2 n_3.

Setting                      | (n_1, n_2, n_3) | (r_1, r_2, r_3) | (κ, B, |S|)    | Overlap                       | Latent                        | Scaled
Matrix completion [11]       | (P, Q, 1)       | (1, r, r)       | (1, 1, |S|)    | ∥r∥_{1/2}(P + Q) log(P + Q)   | PQ log(PQ)                    | r(P + Q) log(PQ)
MTL [16] (homogeneous case)  | (d, d, d)       | (r, d, d)       | (d, 1/√d, d²)  | ∥r∥_{1/2} log(d²)             | r log(d²)                     | r log(d²)
MTL (heterogeneous case)     | (d, P, Q)       | (r, P, r′)      | (d, 1/√d, PQ)  | ∥r∥_{1/2} log(PQ)             | d log(dQ)                     | r log(dQ)
MLMTL [17] (homogeneous)     | (d, d, d)       | (r_1, r_2, r_3) | (κ, 1, |S|)    | κ∥r∥_{1/2} d² log(d²)         | κ(min_k r_k) d² log(d²)       | κ(min_k r_k) d² log(d²)
MLMTL [17] (heterogeneous)   | (d, P, Q)       | (r, P, r′)      | (κ, 1, |S|)    | κ∥r∥_{1/2} PQ log(PQ)         | κ dPQ log(dQ)                 | κ min(rPQ, dPr′) log(dQ)
Tensor completion            | (n_1, n_2, n_3) | (r_1, r_2, r_3) | (1, 1, |S|)    | ∥r∥_{1/2} min_k(D_k log D_k)  | (min_k r_k) max_k(D_k log D_k)| min_k(r_k/n_k) N log(max_k D_k)

In multilinear multitask learning (MLMTL) [17], any mode could possibly be low rank, but it is a priori unknown which. The sample complexities are defined the same way as above, with respect to m|S|. The homogeneous case is when d = P = Q. The heterogeneous case is when the first mode or the third mode is low rank but P ≤ r < d. Similarly to the above two settings, the overlapped trace norm has a mild dependence on the dimensions but a higher dependence on the rank, ∥r∥_{1/2} ≥ r. The latent trace norm performs as well as knowing the mode that has the lowest rank in the homogeneous case. However, it fails to recognize the mode with the lowest rank relative to its dimension. The scaled latent trace norm does this, and although it has a higher logarithmic dependence, it is competitive in both cases. Finally, our bounds also hold for tensor completion. Although Tomioka et al. [22, 23] studied tensor completion algorithms, their analysis assumed that the inputs x_ipq are drawn from a Gaussian distribution, which does not hold for tensor completion. Note that in our setting x_ipq can be an indicator vector that has a one in the jth position uniformly over 1, . . . , d. In this case, κ = 1. The sample complexities of the different norms with respect to m|S| are shown in the last row of Table 2. The sample complexity for the overlapped trace norm is the same as the one in [23], up to a logarithmic factor. The sample complexities for the latent and scaled latent trace norms are new.
Again, we can see that while the latent trace norm recognizes the mode with the lowest rank, the scaled latent trace norm is able to recognize the mode with the lowest rank relative to its dimension. 4 Experiments We conducted several experiments to evaluate the performance of the tensor-based multitask learning settings discussed in Section 3. In Section 4.1, we discuss simulations conducted on synthetic data sets. In Sections 4.2 and 4.3, we discuss experiments on two real-world data sets, namely the Restaurant data set [26] and the School Effectiveness data set [3, 4]. Both of our real-world data sets have heterogeneous dimensions (see Figure 2), and it is a priori unclear which mode allows the most sharing of information. 4.1 Synthetic data sets The true d × P × Q tensor W* was generated by first sampling an r_1 × r_2 × r_3 core tensor and then multiplying a random orthonormal matrix into each of its modes. For each task (p, q) ∈ [P] × [Q], we generated a training set of m vectors (x_ipq, y_ipq)_{i=1}^m by first sampling x_ipq from the standard normal distribution and then computing y_ipq = ⟨x_ipq, w_pq⟩ + ν_i, where ν_i was drawn from a zero-mean normal distribution with variance 0.1. We used the penalty formulation of (6) with the squared loss and selected the regularization parameter λ using two-fold cross-validation on the training set from the range 0.01 to 10 with interval 0.1. In addition to the three norms for tensors discussed in the previous section, we evaluated the matrix-based multitask learning approaches that penalize the trace norm of the unfolding of W at a specific mode. The conventional convex multitask learning [2, 3, 16] corresponds to the one of these approaches that penalizes the trace norm of the first unfolding, ∥W_(1)∥_tr. The convex MLMTL in [17] corresponds to the overlapped trace norm. In the first experiment, we chose d = P = Q = 10 and r_1 = r_2 = r_3 = 3. Therefore, both the dimensions and the multilinear rank are homogeneous.
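The synthetic-data generation above (random core tensor, orthonormal factors, linear responses with Gaussian noise) can be sketched with numpy; the shapes and the noise level follow the text, while the QR-based construction of the orthonormal factors is one standard choice assumed here:

```python
import numpy as np

def make_synthetic(d, P, Q, ranks, m, noise=0.1, seed=0):
    """Random d x P x Q tensor of multilinear rank `ranks`, plus m
    standard-normal inputs and noisy linear responses per task (p, q)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal(ranks)                # r1 x r2 x r3 core tensor
    for mode, (n, r) in enumerate(zip((d, P, Q), ranks)):
        # multiply a random orthonormal n x r factor into this mode
        U, _ = np.linalg.qr(rng.standard_normal((n, r)))
        W = np.moveaxis(np.tensordot(U, np.moveaxis(W, mode, 0), axes=(1, 0)),
                        0, mode)
    X = rng.standard_normal((P, Q, m, d))         # inputs x_ipq
    Y = (np.einsum('pqmd,dpq->pqm', X, W)         # <x_ipq, w_pq> + noise
         + noise * rng.standard_normal((P, Q, m)))
    return W, X, Y
```

With (d, P, Q) = (10, 3, 10) and ranks (3, 3, 8) this reproduces the heterogeneous configuration used in the last synthetic experiment (a multilinear rank of (3, 3, 8) holds with probability one for a generic core).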
The result is shown in Figure 1(a). The overlapped trace norm performed the best, the matrix-based approaches came next, and the latent trace norm and the scaled latent trace norm were the worst. The scaling of the latent trace norm had no effect because the dimensions were homogeneous. Since the sample complexities of all the methods were the same in this setting (see Table 2), the difference in performance could be explained by a constant factor K (= 3) that is not shown in the sample complexities. In the second experiment, we chose the dimensions to be homogeneous as d = P = Q = 10, but (r1, r2, r3) = (3, 6, 8). The result is shown in Figure 1(b). In this setting, the (scaled) latent trace norm and the mode-1 regularization performed the best. The lower the rank of the corresponding mode, the lower the error of the matrix-based MTL approaches. The overlapped trace norm was somewhere in the middle of the three matrix-based approaches. [Figure 1 (plot data removed), MSE vs. sample size for the overlapped, latent, and scaled latent trace norms and mode-1/2/3 regularization: (a) Synthetic experiment for the case when both the dimensions and the ranks are homogeneous; the true tensor is 10 × 10 × 10 with multilinear rank (3, 3, 3). (b) Synthetic experiment for the case when the dimensions are homogeneous but the ranks are heterogeneous; the true tensor is 10 × 10 × 10 with multilinear rank (3, 6, 8). (c) Synthetic experiment for the case when both the dimensions and the ranks are heterogeneous; the true tensor is 10 × 3 × 10 with multilinear rank (3, 3, 8).]
Figure 1: Results for the synthetic data sets. In the last experiment, we chose both the dimensions and the multilinear rank to be heterogeneous as (d, P, Q) = (10, 3, 10) and (r1, r2, r3) = (3, 3, 8). The result is shown in Figure 1(c). Clearly the first mode had the lowest rank relative to its dimension. However, the latent trace norm recognized the second mode as the mode with the lowest rank and performed similarly to the mode-2 regularization. The overlapped trace norm performed better, but it was worse than the mode-1 regularization. The scaled latent trace norm performed comparably to the mode-1 regularization. 4.2 Restaurant data set The Restaurant data set contains data for a restaurant recommendation system in which different customers have rated different aspects of each restaurant. Following the same approach as in [17], we modelled the problem as an MLMTL problem with d = 45 features, P = 3 aspects, and Q = 138 customers. The total number of instances over all the tasks was 3483, and we randomly selected training sets of sizes 400, 800, 1200, 1600, 2000, 2400, and 2800. When the size was small, many tasks contained no training example. We also selected 250 instances as the validation set, and the rest was used as the test set. The regularization parameter for each norm was selected by minimizing the mean squared error on the validation set over candidate values in the intervals [50, 1000] for the overlapped, [0.5, 40] for the latent, and [6000, 20000] for the scaled latent norms, respectively. We also evaluated matrix-based MTL approaches on the different modes and ridge regression (Frobenius norm regularization; abbreviated as RR) as baselines. The convex MLMTL in [17] corresponds to the overlapped trace norm. The result is shown in Figure 2(a). We found the multilinear rank of the solution obtained by the overlapped trace norm to be typically (1, 3, 3).
This was consistent with the fact that the performances of the mode-1 regularization and ridge regression were equal; in other words, the effective dimension of the first mode (features) was one instead of 45. The latent trace norm recognized the first mode as the mode with the lowest rank and thus failed to take advantage of the low-rank-ness of the second and third modes. The scaled latent trace norm performed the best, matching the performance of the mode-2 and mode-3 regularization. When the number of samples was above 2400, the latent trace norm caught up with the other methods, probably because the effective dimension became higher in this regime. 4.3 School data set The data set comes from the Inner London Education Authority (ILEA) and consists of examination records of 15362 students at 139 schools in the years 1985, 1986, and 1987. We followed [4] for the preprocessing of categorical attributes and obtained 24 features. Previously, Argyriou et al. [3] modeled this data set as a 27 × 139 matrix-based MTL problem in which the year was modeled as a trinomial attribute. [Figure 2 (plot data removed): (a) MSE vs. sample size for the 45 × 3 × 138 Restaurant data set; (b) explained variance vs. sample size for the 24 × 139 × 3 School data set; curves for the overlapped, latent, and scaled latent trace norms, mode-1/2/3 regularization, and RR. Figure 2: Results for the real world data sets.] Instead, here we model this data set as a 24 × 139 × 3 MLMTL problem in which the third mode corresponds to the year. Following earlier papers [3, 4], we used the percentage of explained variance, defined as 100 · (1 − (test MSE)/(variance of y)), as the evaluation metric. The results are shown in Figure 2(b).
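The explained-variance metric defined above is a one-liner; a minimal sketch (the function name and toy data are ours):

```python
import numpy as np

def explained_variance_pct(y_true, y_pred):
    """Percentage of explained variance: 100 * (1 - (test MSE) / Var(y))."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    return 100.0 * (1.0 - mse / np.var(y_true))

# A perfect predictor explains 100% of the variance; predicting the mean explains 0%.
y = np.array([1.0, 2.0, 3.0, 4.0])
print(explained_variance_pct(y, y))                      # -> 100.0
print(explained_variance_pct(y, np.full(4, y.mean())))   # -> 0.0
```

Note that the metric can be negative when a predictor is worse than the constant mean predictor.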
First, ridge regression performed the worst because it was not able to take advantage of the low-rank-ness of any mode. Second, the plain latent trace norm performed similarly to the mode-3 regularization, probably because the dimension 3 was lower than the rank of the other two modes. Clearly the scaled latent trace norm performed the best, matching the performance of the mode-2 regularization; probably the second mode had the most redundancy. The performance of the overlapped trace norm was comparable to or slightly better than the mode-1 regularization. The percentage of explained variance of the latent trace norm exceeded 30% around sample size 4000 (around 30 samples per school), which is higher than the Hierarchical Bayes approach [4] (around 29.5%) and matrix-based MTL [3] (around 26.7%), both of which used around 80 samples per school. 5 Discussion Using tensors for modeling multitask learning [17, 19] is a promising direction that allows us to take advantage of the similarity of tasks in multiple dimensions and even to make predictions for a task with no training example. However, having multiple modes, we have to deal with more hyperparameters in the conventional nonconvex tensor decomposition framework. Convex relaxation of the tensor multilinear rank allows us to side-step this issue. In fact, we have shown that the sample complexity of the latent trace norm is as good as knowing the mode with the lowest rank. This is consistent with the analysis of [21] in the tensor denoising setting (see Table 1). In the setting of tensor-based MTL, however, the notion of the mode with the lowest rank may be vacuous because some modes may have very low dimension. In fact, the sample complexity of the latent trace norm can be as bad as not using any low-rank-ness at all if there is a mode with dimension lower than the rank of the other modes.
The scaled latent trace norm we proposed in this paper recognizes the mode with the lowest rank relative to its dimension and leads to competitive sample complexities in the various settings shown in Table 2.

Acknowledgment: MS acknowledges support from the JST CREST program.

References
[1] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. J. Mach. Learn. Res., 6:1817–1853, 2005.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Adv. Neural. Inf. Process. Syst. 19, pages 41–48. MIT Press, Cambridge, MA, 2007.
[3] A. Argyriou, M. Pontil, Y. Ying, and C. A. Micchelli. A spectral regularization framework for multi-task structure learning. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Adv. Neural. Inf. Process. Syst. 20, pages 25–32. Curran Associates, Inc., 2008.
[4] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. J. Mach. Learn. Res., 4:83–99, 2003.
[5] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. J. Mach. Learn. Res., 3:463–482, 2002.
[6] J. Baxter. A model of inductive bias learning. J. Artif. Intell. Res., 12:149–198, 2000.
[7] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[8] L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl., 21(4):1253–1278, 2000.
[9] L. De Lathauwer, B. De Moor, and J. Vandewalle. On the best rank-1 and rank-(R1, R2, . . . , RN) approximation of higher-order tensors. SIAM J. Matrix Anal. Appl., 21(4):1324–1342, 2000.
[10] M. Fazel, H. Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proc. of the American Control Conference, 2001.
[11] R. Foygel and N. Srebro. Concentration-based guarantees for low-rank matrix reconstruction.
arXiv preprint arXiv:1102.3923, 2011.
[12] S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27:025010, 2011.
[13] S. M. Kakade, S. Shalev-Shwartz, and A. Tewari. Regularization techniques for learning with matrices. J. Mach. Learn. Res., 13(1):1865–1890, 2012.
[14] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[15] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data. In Proc. ICCV, 2009.
[16] A. Maurer and M. Pontil. Excess risk bounds for multitask learning with trace norm regularization. Technical report, arXiv:1212.1496, 2012.
[17] B. Romera-Paredes, H. Aung, N. Bianchi-Berthouze, and M. Pontil. Multilinear multitask learning. In Proceedings of the 30th International Conference on Machine Learning, pages 1444–1452, 2013.
[18] M. Signoretto, L. De Lathauwer, and J. Suykens. Nuclear norms for tensors and their use for convex multilinear estimation. Technical Report 10-186, ESAT-SISTA, K.U.Leuven, 2010.
[19] M. Signoretto, L. De Lathauwer, and J. A. K. Suykens. Learning tensors in reproducing kernel Hilbert spaces with multilinear spectral penalties. Technical report, arXiv:1310.4977, 2013.
[20] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Adv. Neural. Inf. Process. Syst. 17, pages 1329–1336. MIT Press, Cambridge, MA, 2005.
[21] R. Tomioka and T. Suzuki. Convex tensor decomposition via structured Schatten norm regularization. In C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Weinberger, editors, Adv. Neural. Inf. Process. Syst. 26, pages 1331–1339, 2013.
[22] R. Tomioka, K. Hayashi, and H. Kashima. Estimation of low-rank tensors via convex optimization. Technical report, arXiv:1010.0789, 2011.
[23] R. Tomioka, T. Suzuki, K. Hayashi, and H. Kashima.
Statistical performance of convex tensor decomposition. In Adv. Neural. Inf. Process. Syst. 24, pages 972–980, 2011.
[24] J. A. Tropp. User-friendly tail bounds for sums of random matrices. Found. Comput. Math., 12(4):389–434, 2012.
[25] L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
[26] B. Vargas-Govea, G. González-Serna, and R. Ponce-Medellín. Effects of relevant contextual features in the performance of a restaurant recommender system. In Proceedings of the 3rd Workshop on Context-Aware Recommender Systems, 2011.
Efficient Partial Monitoring with Prior Information Hastagiri P Vanchinathan Dept. of Computer Science ETH Zürich, Switzerland hastagiri@inf.ethz.ch Gábor Bartók Dept. of Computer Science ETH Zürich, Switzerland bartok@inf.ethz.ch Andreas Krause Dept. of Computer Science ETH Zürich, Switzerland krausea@ethz.ch Abstract Partial monitoring is a general model for online learning with limited feedback: a learner chooses actions in a sequential manner while an opponent chooses outcomes. In every round, the learner suffers some loss and receives some feedback based on the action and the outcome. The goal of the learner is to minimize her cumulative loss. Applications range from dynamic pricing to label-efficient prediction to dueling bandits. In this paper, we assume that we are given some prior information about the distribution based on which the opponent generates the outcomes. We propose BPM, a family of new efficient algorithms whose core is to track the outcome distribution with an ellipsoid centered around the estimated distribution. We show that our algorithm provably enjoys a near-optimal regret rate for locally observable partial-monitoring problems against stochastic opponents. As demonstrated with experiments on synthetic as well as real-world data, the algorithm outperforms previous approaches, even for very uninformed priors, with an order of magnitude smaller regret and lower running time. 1 Introduction We consider partial monitoring, a repeated game where in every time step a learner chooses an action while, simultaneously, an opponent chooses an outcome. Then the learner receives a loss based on the action and outcome chosen. The learner also receives some feedback based on which she can make better decisions in subsequent time steps. The goal of the learner is to minimize her cumulative loss over some time horizon.
The performance of the learner is measured by the regret: the excess cumulative loss of the learner compared to that of the best fixed constant action. If the regret scales linearly with the time horizon, the learner does not approach the performance of the best action; that is, the learner fails to learn the problem. On the other hand, sublinear regret indicates that the disadvantage of the learner compared to the best fixed strategy fades with time. Games in which the learner receives the outcome as feedback after every time step are called online learning with full information. This special case of partial monitoring has been addressed by (among others) Vovk [1] and Littlestone and Warmuth [2], who designed the randomized algorithm Exponentially Weighted Averages (EWA) as a learner strategy. This algorithm achieves Θ(√(T log N)) expected regret against any opponent, where N is the number of actions and T is the time horizon. This regret growth rate is also proven to be optimal. Another well-studied special case is the so-called multi-armed bandit problem. In this feedback model, the learner gets to observe the loss she suffered in every time step. That is, the learner does not receive any information about the losses of actions she did not choose. Asymptotically optimal results were obtained by Audibert and Bubeck [3], who designed the Implicitly Normalized Forecaster (INF), which achieves the minimax optimal Θ(√(TN)) regret growth rate. (The algorithm Exp3 due to Auer et al. [4] achieves the same rate up to a logarithmic factor.) However, not all online learning problems have one of the above feedback structures. An important example of a problem that fits neither the full-information nor the bandit setting is dynamic pricing. Consider the problem of a vendor wanting to sell his products to customers for the best possible price.
When a customer comes in, she (secretly) decides on a maximum price she is willing to pay for his product, while the vendor has to set a price without knowing the customer’s preferences. The loss of the vendor is some preset constant if the customer does not buy the product, and an “opportunity loss” when the product is sold cheaper than the customer’s maximum. The feedback, on the other hand, is merely an indicator of whether the transaction happened or not. Dynamic pricing is just one of the practical applications of partial monitoring. Label-efficient prediction, in its simplest form, has three actions: the first two actions are guesses of a binary outcome but provide no information, while the third action provides information about the outcome at the price of some unit loss. This can be thought of as an abstract form of spam filtering: the first two actions correspond to putting an email in the inbox or the spam folder, and the third action corresponds to asking the user whether the email is spam or not. Another problem that can be cast as partial monitoring is that of dueling bandits [5, 6], in which the learner chooses a pair of actions in every time step, the loss she suffers is the average loss of the two actions, and the feedback is which action was “better”. In online learning, we distinguish different models of how the opponent generates the outcomes. In the mildest version, called stochastic or stationary memoryless, the opponent chooses an outcome distribution before the game starts and then draws outcomes from the chosen distribution in an iid manner. The oblivious adversarial opponent chooses the outcomes arbitrarily, but without observing the actions of the learner; this selection method is equivalent to choosing an outcome sequence ahead of time. Finally, the non-oblivious or adaptive adversarial opponent chooses outcomes arbitrarily with the possibility of looking at past actions of the learner.
In this work, we focus on strategies against stochastic opponents. Related work Partial monitoring was first addressed in the seminal paper of Piccolboni and Schindelhauer [7], who designed and analyzed the algorithm FeedExp3. The algorithm’s main idea is to maintain an unbiased estimate of the loss of each action in every time step, and then use these estimates to run the full-information algorithm (EWA). Piccolboni and Schindelhauer [7] proved an O(T^{3/4}) upper bound on the regret (not taking into account the number of actions) for games for which learning is at all possible. This bound was later improved to O(T^{2/3}) by Cesa-Bianchi et al. [8], who also constructed an example of a problem for which this bound is optimal. From the above bounds it can be seen that not all partial-monitoring problems have the same level of difficulty: while bandit problems enjoy an O(√T) regret rate, some partial-monitoring problems have Ω(T^{2/3}) regret. To this end, Bartók et al. [9] showed that partial-monitoring problems with finitely many actions and outcomes can be classified into four groups: trivial with zero regret, easy with Θ̃(√T) regret, hard with Θ(T^{2/3}) regret, and hopeless with linear regret. The distinguishing feature between easy and hard problems is the local observability condition, an algebraic condition on the feedback structure that can be efficiently verified for any problem. Bartók et al. [9] showed the above classification against stochastic opponents with the help of the algorithm BALATON. This algorithm keeps track of estimates of the loss differences of “neighboring” action pairs and eliminates actions that are highly likely to be suboptimal. Since then, several algorithms have been proposed that achieve the Õ(√T) regret bound for easy games [10, 11]. All these algorithms rely on the core idea of estimating the expected loss difference between pairs of actions.
Our contributions In this paper, we introduce BPM (Bayes-update Partial Monitoring), a new family of algorithms against iid stochastic opponents that rely on a novel use of past observations. Our algorithms maintain a confidence ellipsoid in the space of outcome distributions, and update the ellipsoid based on observations following a Bayes-like update. Our approach enjoys better empirical performance and lower computational overhead; another crucial advantage is that we can incorporate prior information about the outcome distribution by means of an initial confidence ellipsoid. We prove near-optimal minimax expected regret bounds for our algorithm, and demonstrate its effectiveness on several partial-monitoring problems on synthetic and real data. 2 Problem setup Partial monitoring is a repeated game where, in every round, a learner chooses an action while the opponent chooses an outcome, each from some finite set. Then, the learner observes a feedback signal (from some given set of symbols) and suffers some loss, both of which are deterministic functions of the action and outcome chosen. In this paper we assume that the opponent chooses the outcomes in an iid stochastic manner. The goal of the learner is to minimize her cumulative loss. The following definitions and concepts are mostly taken from Bartók et al. [9]. An instance of partial monitoring is defined by the loss matrix L ∈ R^{N×M} and the feedback table H ∈ Σ^{N×M}, where N and M are the cardinalities of the action set and the outcome set, respectively, and Σ is some alphabet of symbols. That is, if the learner chooses action i while the outcome is j, the loss suffered by the learner is L[i,j], and the feedback received is H[i,j]. For an action 1 ≤ i ≤ N, let ℓ_i denote the column vector given by the ith row of L. Let Δ_M denote the M-dimensional probability simplex.
It is easy to see that for any p ∈ Δ_M, if we assume that the opponent uses p to draw the outcomes (that is, p is the opponent strategy), the expected loss of action i can be expressed as ℓ_i^⊤ p. We measure the performance of an algorithm by its expected regret, defined as the expected difference between the cumulative loss of the algorithm and that of the best fixed action in hindsight: R_T = max_{1≤i≤N} Σ_{t=1}^{T} (ℓ_{I_t} − ℓ_i)^⊤ p, where T is some time horizon, I_t (t = 1, . . . , T) is the action chosen in time step t, and p is the outcome distribution the opponent uses. In this paper, we also assume we have some prior knowledge about the outcome distribution in the form of a confidence ellipsoid: we are given a distribution p_0 ∈ Δ_M and a symmetric positive semidefinite covariance matrix Σ_0 ∈ R^{M×M} such that the true outcome distribution p* satisfies ∥p_0 − p*∥_{Σ_0^{-1}} = √((p_0 − p*)^⊤ Σ_0^{-1} (p_0 − p*)) ≤ 1. We use the term “confidence ellipsoid” even though our condition is not probabilistic; we do not assume that p* is drawn from a Gaussian distribution before the game starts. On the other hand, the way we track p* is derived from Bayes updates with a Gaussian conjugate prior, hence the name. We would also like to note that having the above prior knowledge is without loss of generality: for “large enough” Σ_0, the whole probability simplex is contained in the confidence ellipsoid, and thus partial monitoring without any prior information reduces to our setting. The following definition reveals how we use the loss matrix to recover the structure of a game. Definition 1 (Cell decomposition, Bartók et al. [9, Definition 2]). For any action 1 ≤ i ≤ N, let C_i denote the set of opponent strategies for which action i is optimal: C_i = { p ∈ Δ_M : ∀ 1 ≤ j ≤ N, (ℓ_i − ℓ_j)^⊤ p ≤ 0 }. We call the set C_i the optimality cell of action i. Furthermore, we call the set of optimality cells {C_1, . . . , C_N} the cell decomposition of the game. Every cell C_i is a convex closed polytope, as it is defined by a linear inequality system.
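The expected loss ℓ_i^⊤ p and the cell-membership test behind Definition 1 amount to a single matrix-vector product; a minimal sketch (the toy game is our own illustration, not from the paper):

```python
import numpy as np

def optimal_action(L, p):
    """The action whose optimality cell (Definition 1) contains p: argmin_i l_i^T p."""
    return int(np.argmin(np.asarray(L) @ np.asarray(p)))

# Toy 3-action, 2-outcome game: the third action hedges between the extremes,
# so its cell covers the middle of the 1-dimensional simplex.
L = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [0.4, 0.4]])
print(optimal_action(L, [0.9, 0.1]))  # -> 0
print(optimal_action(L, [0.5, 0.5]))  # -> 2
```

Sweeping p over the simplex and recording the argmin recovers the cell decomposition numerically.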
Normally, a cell has dimension M − 1, which is the same as the dimensionality of the probability simplex. It may happen, however, that a cell is of lower dimensionality. Another possible degeneracy is when two actions share the same cell. In this paper, for ease of presentation, we assume that these degeneracies do not appear. For an illustration of cell decomposition, see Figure 1(a). Now that we know the regions of optimality, we can define when two actions are neighbors. Intuitively, two actions are neighbors if their optimality cells are neighbors in the strong sense that they do not merely meet in “one corner”. Definition 2 (Neighbors, Bartók et al. [9, page 4]). Two actions i and j are neighbors if the intersection of their optimality cells C_i ∩ C_j is an (M − 2)-dimensional convex polytope. [Figure 1 (plot data removed): (a) An example of a cell decomposition with M = 3 outcomes. Under the true outcome distribution p*, action 3 is optimal. Cells C_1 and C_3 are neighbors, but C_2 and C_5 are not. (b) Before the update: the current estimate p_{t−1} is far away from the true distribution, and the confidence ellipsoid is large. (c) After the update: p_t is closer to the truth, and the confidence ellipsoid shrinks.] To optimize performance, the learner’s primary goal is to find out which cell the opponent strategy lies in. Then, the learner can choose the action associated with that cell to play optimally. Since the feedback the learner receives is limited, this task of finding the optimal cell may be challenging. The next definition enables us to utilize the feedback table H. Definition 3 (Signal matrix, Bartók et al. [9, Definition 1]). Let {α_1, α_2, . . . , α_{σ_i}} ⊆ Σ be the set of symbols appearing in row i of the feedback table H. We define the signal matrix S_i ∈ {0,1}^{σ_i×M} of action i by S_i[k,j] = I(H[i,j] = α_k). In words, S_i is the indicator table of observing symbols α_1, . . . , α_{σ_i} under outcomes 1, . . . , M, given that the chosen action is i.
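Definition 3 translates directly into code; a minimal sketch, assuming symbols α_1, α_2, . . . are indexed in order of first appearance in the row (the paper does not fix an ordering):

```python
import numpy as np

def signal_matrix(feedback_row):
    """Build S_i from the i-th row of the feedback table H (Definition 3)."""
    symbols = []
    for s in feedback_row:          # collect symbols in order of first appearance
        if s not in symbols:
            symbols.append(s)
    S = np.zeros((len(symbols), len(feedback_row)), dtype=int)
    for j, s in enumerate(feedback_row):
        S[symbols.index(s), j] = 1  # S[k, j] = 1 iff H[i, j] == alpha_k
    return S

# The row (a b a c) discussed in the text yields a 3 x 4 signal matrix.
print(signal_matrix(['a', 'b', 'a', 'c']))
# -> [[1 0 1 0]
#     [0 1 0 0]
#     [0 0 0 1]]
```

Each column is the unit-vector encoding of the symbol observed under the corresponding outcome, which is exactly the "linear transformation" property used below.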
For an example, consider the case when the ith row of H is (a b a c). Then S_i = [[1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]. A very useful property of the signal matrix is that if we represent outcomes by M-dimensional unit vectors, then S_i can be used as a linear transformation to arrive at the unit-vector representation of the observation. The following condition is key in distinguishing easy and hard games: Definition 4 (Local observability, Bartók et al. [9, Definition 3]). Let actions i and j be neighbors. These actions are said to be locally observable if ℓ_i − ℓ_j ∈ Im S_i^⊤ ⊕ Im S_j^⊤. Furthermore, a game is locally observable if all of its neighboring action pairs are locally observable. Bartók et al. [9] showed that finite stochastic partial-monitoring problems that admit local observability have Θ̃(√T) minimax expected regret. In the following, we present our new algorithm family that achieves the same regret rate for locally observable games against stochastic opponents. 3 BPM: New algorithms for Partial Monitoring based on Bayes updates The algorithms we propose can be decomposed into two main building blocks: the first one keeps track of a belief about the true outcome distribution and provides us with a set of feasible actions in every round; the second one is responsible for selecting the action to play from this action set. Pseudocode for the algorithm family is shown in Algorithm 1. 3.1 Update Rule The method of updating the belief about the true outcome distribution (p*) is based on the idea that we pretend that the outcomes are generated from a Gaussian distribution with covariance Σ = I_M and unknown mean. We also pretend we have a Gaussian prior for tracking the mean. The parameters of this prior are denoted by p_0 (mean) and Σ_0 (covariance). In every time step, we perform a Gaussian Bayes update using the observation received. Algorithm 1 BPM. Input: L, H, p_0, Σ_0. Initialization: calculate the signal matrices S_i. For t = 1 to T: use the selection rule (cf. Sec.
3.2) to choose an action I_t; observe feedback Y_t; update the posterior: Σ_t^{-1} = Σ_{t-1}^{-1} + P_{I_t} and p_t = Σ_t (Σ_{t-1}^{-1} p_{t-1} + S_{I_t}^⊤ (S_{I_t} S_{I_t}^⊤)^{-1} Y_t); end for. Full-information case As a gentle start, we explain how the update rule would look if we had full information about the outcome in each time step. The update in this case is identical to the standard Gaussian one-step update: Σ_t = Σ_{t-1} − Σ_{t-1}(Σ_{t-1} + I)^{-1}Σ_{t-1}, or equivalently Σ_t^{-1} = Σ_{t-1}^{-1} + I; μ_t = Σ_t(Σ_{t-1}^{-1} μ_{t-1} + X_t), or equivalently μ_t = μ_{t-1} + Σ_t(X_t − μ_{t-1}). Here we use subindex t − 1 for the prior parameters and t for the posterior parameters in time step t, and denote by X_t the outcome (observed in this case), encoded as an M-dimensional unit vector. General case Moving away from the full-information case, we face the problem of not observing the outcome, only some symbol that is governed by the signal matrix of the chosen action and the outcome itself. If we denote, as above, the outcome at time step t by an M-dimensional unit vector X_t, then the observation symbol can be thought of as a unit vector given by Y_t = S_i X_t, provided the chosen action is i. It follows that what we observe is a linear transformation of the sample from the outcome distribution. Following the Bayes update rule and assuming we chose action i at time step t, we derive the corresponding Gaussian posterior given that the likelihood of the observation is π(Y | p) ∼ N(S_i p, S_i S_i^⊤). After some algebraic manipulation, we get that the posterior distribution is Gaussian with covariance Σ_t = (Σ_{t-1}^{-1} + P_i)^{-1} and mean p_t = Σ_t (Σ_{t-1}^{-1} p_{t-1} + P_i X_t), where P_i = S_i^⊤ (S_i S_i^⊤)^{-1} S_i is the orthogonal projection onto the image space of S_i^⊤. Note that even though X_t is not observed, the update can be performed, since P_i X_t = S_i^⊤ (S_i S_i^⊤)^{-1} S_i X_t = S_i^⊤ (S_i S_i^⊤)^{-1} Y_t.
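The general-case update above can be sketched directly in NumPy. This is our own illustration of the formulas (not the authors' implementation); the sanity check verifies that with S_i = I it collapses to the full-information update:

```python
import numpy as np

def bpm_update(p_prev, Sigma_prev, S_i, y):
    """One Bayes-like belief update from observation y = S_i X_t (Section 3.1)."""
    G = np.linalg.inv(S_i @ S_i.T)
    P_i = S_i.T @ G @ S_i                     # orthogonal projection onto Im(S_i^T)
    Sigma_inv = np.linalg.inv(Sigma_prev)
    Sigma_new = np.linalg.inv(Sigma_inv + P_i)
    # P_i X_t = S_i^T (S_i S_i^T)^{-1} Y_t, so the unobserved outcome X_t is not needed.
    p_new = Sigma_new @ (Sigma_inv @ p_prev + S_i.T @ G @ y)
    return p_new, Sigma_new

# Sanity check: with full information (S_i = I), one step reproduces the
# standard Gaussian update Sigma_t = (Sigma_{t-1}^{-1} + I)^{-1} = 0.5 I.
p0, Sigma0 = np.full(4, 0.25), np.eye(4)
x = np.array([1.0, 0.0, 0.0, 0.0])            # observed outcome as a unit vector
p1, Sigma1 = bpm_update(p0, Sigma0, np.eye(4), x)
print(np.allclose(Sigma1, 0.5 * np.eye(4)), np.allclose(p1, 0.5 * (p0 + x)))  # -> True True
```

For clarity this sketch inverts matrices explicitly; an efficient implementation would instead maintain Σ_t^{-1} across rounds, as the algorithm's update rule suggests.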
A significant advantage of this method of tracking the outcome distribution, as opposed to keeping track of loss-difference estimates (as done in previous works), is that feedback from one action can provide information about the losses of all the actions. We believe that this property has a major role in the empirical performance improvement over existing methods. An important part of analyzing our algorithm is to show that, despite the fact that the outcome distribution is not Gaussian, the update tracks the true outcome distribution well. For an illustration of tracking the true outcome distribution with the above update, see Figures 1(b) and 1(c). 3.2 Selection rules For selecting actions given the posterior parameters, we propose two versions of the selection rule: 1. Draw a random sample p from the distribution N(p_{t-1}, Σ_{t-1}), project the sample onto the probability simplex, then choose the action that minimizes the loss for the outcome distribution p. This rule is a close relative of Thompson sampling. We call this version of the algorithm BPM-TS. 2. Use p_{t-1} and Σ_{t-1} to build a confidence ellipsoid for p*, enumerate all actions whose cells intersect with this ellipsoid, then choose the action that was chosen the fewest times so far (called BPM-LEAST). Our experiments demonstrate the performance of both versions; we analyze the version BPM-LEAST. 4 Analysis We now analyze BPM-LEAST, which uses the Gaussian updates and considers a set of feasible actions based on the criterion that an action is feasible if its optimality cell intersects with the ellipsoid { p : ∥p − p_t∥_{Σ_t^{-1}} ≤ 1 + √(N log(MT)/2) }. From these feasible actions, it picks the one that has been chosen the fewest times up to time step t. For this version of the algorithm, the following regret bound holds. Theorem 1. Given a locally observable partial-monitoring problem (L, H) with prior information p_0, Σ_0, the algorithm BPM-LEAST achieves expected regret R_T ≤ C √(T N log(MT)), where C is some problem-dependent constant.
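The BPM-TS rule in Section 3.2 can be sketched with any Euclidean projection onto the probability simplex; the paper does not specify one, so the standard sort-based projection below is our own choice of subroutine:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex (sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / ks > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def bpm_ts_action(L, p_mean, Sigma, rng):
    """One BPM-TS step: sample a belief, project it to a distribution, best-respond."""
    p = project_to_simplex(rng.multivariate_normal(p_mean, Sigma))
    return int(np.argmin(L @ p)), p

print(project_to_simplex(np.array([0.4, 0.4])))  # -> [0.5 0.5]
rng = np.random.default_rng(1)
L = np.array([[0.0, 1.0], [1.0, 0.0]])
action, p = bpm_ts_action(L, np.array([0.8, 0.2]), 0.01 * np.eye(2), rng)
```

As the randomness in the posterior sample shrinks with Σ_t, the sampled best response concentrates on the cell containing p*, which is the Thompson-sampling flavor noted in the text.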
The above constant C depends on two main factors, both related to the feedback structure. The first is the sum of the smallest eigenvalues of S_i S_i^⊤ over all actions i. The second is related to the local observability condition. As the condition says, for every neighboring action pair i and j, ℓ_i − ℓ_j ∈ Im S_i^⊤ ⊕ Im S_j^⊤. This means that there exist vectors v_ij and v_ji such that ℓ_i − ℓ_j = S_i^⊤ v_ij − S_j^⊤ v_ji. The constant depends on the maximum 2-norm of these v_ij vectors. The proof of the theorem is deferred to the supplementary material. In a nutshell, the proof is divided into two main parts. First, we need to show that the update rule, even though the underlying distribution is not Gaussian, serves as a good tool for tracking the true outcome distribution. After some algebraic manipulation, the problem reduces to finding a high-probability upper bound on the norms of weighted sums of noise vectors. To this end, we use the martingale version of the matrix Hoeffding inequality [12, Theorem 1.3]. Then, we need to show that the confidence ellipsoid shrinks fast enough that, if we only choose actions whose cells intersect with the ellipsoid, we do not suffer a large regret. At the core of this proof, we arrive at a term where we need to upper bound ∥ℓ_i − ℓ_j∥_{Σ_t} for some neighboring action pairs (i, j), and we show that, due to local observability and the speed at which the posterior covariance shrinks, this term can be upper bounded by roughly 1/√t. 5 Experiments First, we run extensive evaluations of BPM on various synthetic datasets and compare its performance against CBP [10] and FeedExp3 [7]. The datasets used in the simulated experiments are identical to the ones used by Bartók et al. [10], which allows us to benchmark against the current state of the art. We also provide results of BPM on a dataset that was collected by Singla and Krause [13] from real interactions with many users on the Amazon Mechanical Turk (AMT) [14] crowdsourcing platform.
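The local observability condition used above (Definition 4) can be verified numerically: ℓ_i − ℓ_j lies in Im S_i^⊤ ⊕ Im S_j^⊤ exactly when a least-squares fit over the stacked column spans has zero residual. A sketch of such a check (our own illustration; the sign convention on v_ij, v_ji is absorbed by the least-squares coefficients):

```python
import numpy as np

def locally_observable(l_i, l_j, S_i, S_j, tol=1e-9):
    """Check l_i - l_j in Im(S_i^T) + Im(S_j^T) by least squares (Definition 4)."""
    A = np.hstack([S_i.T, S_j.T])     # columns span Im(S_i^T) + Im(S_j^T)
    b = np.asarray(l_i, float) - np.asarray(l_j, float)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return bool(np.linalg.norm(A @ coef - b) <= tol)

# Full-information actions (identity signal matrices) are always locally observable...
I4 = np.eye(4)
print(locally_observable(np.arange(4.0), np.zeros(4), I4, I4))      # -> True
# ...while two actions with constant feedback rows reveal nothing beyond the
# all-ones direction, so a non-constant loss difference is not observable.
ones = np.ones((1, 4))
print(locally_observable(np.arange(4.0), np.zeros(4), ones, ones))  # -> False
```

Running this over all neighboring pairs decides whether a game is easy (Θ̃(√T) regret) or hard.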
We present the details of the datasets used and summarize our results and findings in this section.

5.1 Implementation details

In order to implement BPM, we made the following implementation choices:
1. To use BPM-LEAST (see Section 3.2), we need to recover the current feasible actions. We do so by sampling multiple (10000) times from concentric Gaussian ellipsoids centred at the current mean (p_t) and collecting feasible actions based on which cells the samples lie in. We resort to sampling for ease of implementation, because otherwise we would have to find the intersection between an ellipsoid and a simplex in M-dimensional space.
2. To implement BPM-TS, we draw p from the distribution N(p_{t-1}, Σ_{t-1}). We then project it back to the simplex to obtain a probability distribution on the outcome space.
Primarily due to sampling, both our algorithms are computationally more efficient than the existing approaches. In particular, BPM-TS is ideally suited for real-world tasks, as it is several orders of magnitude faster than existing algorithms in all our experiments. In each iteration, BPM-TS only needs to draw one sample from a multivariate Gaussian and does not need any cell decompositions or expensive computations to obtain high-dimensional intersections.
[Figure 2, panels (a)–(e): regret vs. time-step curves for FeedExp, CBP, BPM-LEAST and BPM-TS; (a) Minimax (easy); (b) Minimax (hard); (c) Effects of priors; (d) Single opponent (easy); (e) Single opponent (hard).]
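Implementation choice 1 above can be sketched as follows. The number of samples, the set of concentric scales, and the crude clip-and-renormalise step standing in for a proper simplex projection are all assumptions for illustration; the function names are hypothetical.

```python
import numpy as np

def feasible_actions(L, p_mean, p_cov, rng, n_samples=1000, scales=(0.5, 1.0, 2.0)):
    # Sample from concentric Gaussians centred at p_mean and record
    # which optimality cell each simplex-mapped sample falls into.
    feasible = set()
    for s in scales:
        for p in rng.multivariate_normal(p_mean, s * p_cov, size=n_samples):
            p = np.maximum(p, 0.0)
            if p.sum() <= 0.0:
                continue
            p = p / p.sum()  # crude renormalisation onto the simplex
            feasible.add(int(np.argmin(L @ p)))
    return feasible

def bpm_least_action(L, p_mean, p_cov, counts, rng):
    # BPM-LEAST: among the (approximately) feasible actions,
    # play the one chosen the fewest times so far.
    return min(feasible_actions(L, p_mean, p_cov, rng), key=lambda i: counts[i])
```

When the posterior is tightly concentrated inside one cell, only that cell's action is recovered; with a diffuse posterior, several cells are hit and the least-played action is returned.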
[Figure 2, panel (f): regret vs. time-step curves on real data (dynamic pricing).]
Figure 2: (a,b,d,e) Comparing BPM on the locally non-observable game ((a,d) benign opponent; (b,e) hard opponent). Here, (a,b) show the pointwise maximal regret over 15 scenarios, and (d,e) show the regret against a single opponent strategy. (c) shows the effect of a misspecified prior. (f) shows the performance of the algorithms on the real dynamic pricing dataset.

5.2 Simulated dynamic pricing games

Dynamic pricing is a classic example of partial monitoring (see the introduction), and we compare the performance of the algorithms on the small but not locally observable game described in Bartók et al. [10]. The loss matrix and feedback table for the dynamic pricing game are given by

L = ( 0  1  ···  N−1        H = ( y  y  ···  y
      c  0  ···  N−2              n  y  ···  y
      ⋮      ⋱    ⋮               ⋮      ⋱   ⋮
      c  ···  c   0 ),            n  ···  n  y ).

Recall that the game models a repeated interaction of a seller with buyers in a market. Each buyer can either buy the product at the offered price (signal "y") or decline the offer (signal "n"). The corresponding loss to the seller is either a known constant c (representing opportunity cost) or the difference between the offered price and the outcome of the customer's latent valuation of the product (willingness-to-pay). A similar game models procurement processes as well. Note that this game does not satisfy local observability. While our theoretical results require this condition, in practice, if the opponent does not intentionally select harsh regions of the simplex, BPM remains applicable. Under this setting, expected individual regret is a reasonable measure of performance. That is, we measure the expected regret for fixed opponent strategies. We also consider the minimax expected regret, which measures worst-case performance (pointwise maximum) against multiple opponent strategies.
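The L and H tables above follow a simple pattern and can be generated for any price count N and opportunity cost c; `dynamic_pricing_game` is a hypothetical helper name.

```python
import numpy as np

def dynamic_pricing_game(N, c):
    # L[i, j]: seller's loss for offering (0-indexed) price i when the
    # buyer's valuation is j: foregone revenue j - i on a sale (j >= i),
    # otherwise the opportunity cost c. H[i, j] is the "y"/"n" signal.
    L = np.empty((N, N))
    H = np.empty((N, N), dtype='<U1')
    for i in range(N):
        for j in range(N):
            if j >= i:
                L[i, j], H[i, j] = j - i, 'y'
            else:
                L[i, j], H[i, j] = c, 'n'
    return L, H
```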
Benign opponent. While the dynamic pricing game is not locally observable in general, certain opponent strategies are easier to compete with than others. Specifically, if the stochastic opponent chooses an outcome distribution that is away from the intersection of the cells that lack local observability, the learning happens in "non-dangerous", benign regions. We present results under this setting for simulated dynamic pricing with N = M = 5. The results shown in Figures 2(a) and 2(d) illustrate the benefits of both variants of BPM over previous approaches: we achieve an order-of-magnitude reduction in the regret suffered, with respect to both the minimax and the individual regret.

Harsh opponent. For the same problem, when the opponent chooses an outcome distribution close to the boundary of the cells of two non-locally observable actions, the problem becomes harder. Still, BPM dramatically outperforms the baselines and suffers very little regret, as shown in Figures 2(b) and 2(e).

Effect of the prior. We study the effects of a misspecified prior in Figure 2(c). As long as the initial confidence region specified by the prior covariance is large enough to contain the opponent's distribution, an incorrectly specified prior mean does not have an adverse effect on the performance of BPM. As expected, if the prior confidence ellipse used by BPM does not contain the opponent's outcome distribution, the regret grows linearly in time. Further, if the prior is very informative (accurately specified prior mean and a tight confidence ellipse), very little regret is suffered.

5.3 Results on real data

Dataset description. We simulate a procurement game based on real data. Parameter estimation was done by posting a Human Intelligence Task (HIT) on the Amazon Mechanical Turk (AMT) platform. Motivated by an application in viral marketing, users were asked about the price they would accept for (hypothetically) letting us post promotional material to their friends on a social networking site.
The survey also collected features such as age, geographic region, number of friends in the social network, and activity levels (year of joining, time spent per day, etc.). Note that since the HIT was just a survey and the questions were about a hypothetical scenario, participants had no incentive to misreport their responses. Complete responses were collected from approximately 800 participants. See [13] for more details.

The procurement game. We simulate a procurement auction by playing back these responses offline. The game is very similar in structure to dynamic pricing, with the optimal action being the best fixed price that maximizes the marketer's value or, equivalently, minimizes the loss. We sampled i.i.d. from the survey data and perturbed the samples slightly to simulate a stream of 300,000 potential users. At each iteration, we simulate a user with a private valuation generated as a function of her attributes. We discretized the offer prices and the private valuations to be one of 11 values and set the opportunity cost of losing a user due to low pricing to 0.5. Thus we recover a partial-monitoring game with 11 actions and 11 outcomes with a 0/1 feedback matrix.

Results. We present the results of our evaluation on this dataset in Figure 2(f). Notice that although the game is not locally observable, the outcome distribution does not seem to lie in a difficult region of the cell decomposition, as the adaptive algorithms (CBP and both versions of BPM) perform well. We note that the total regret suffered by BPM-LEAST is a factor of 10 lower than the regret achieved by CBP on this dataset. The plots are averaged over 30 runs of the competing algorithms on the stream. To the best of our knowledge, this is the first time partial monitoring has been evaluated on a real-world problem of this size.

6 Conclusions and future work

We introduced a new family of algorithms for locally observable partial-monitoring problems against stochastic opponents.
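One round of the simulated procurement game might be implemented as below. The uniform price grid on [0, 1] and the exact bookkeeping of the loss are assumptions for illustration; the text fixes only the 11-value discretisation, the 0/1 feedback, and the opportunity cost of 0.5.

```python
import numpy as np

PRICES = np.linspace(0.0, 1.0, 11)  # assumed discretisation of offers/valuations

def procurement_round(price_idx, valuation_idx, c=0.5):
    # The user accepts iff the offered price covers her private valuation;
    # the marketer's loss is the overpayment on acceptance, or the
    # opportunity cost c on rejection. Feedback is the 0/1 accept signal.
    if PRICES[price_idx] >= PRICES[valuation_idx]:
        return PRICES[price_idx] - PRICES[valuation_idx], 1
    return c, 0
```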
We also enriched the model of partial monitoring with the possibility of incorporating prior information about the outcome distribution in the form of a confidence ellipsoid. The new insight of our approach is that instead of tracking loss differences, we explicitly track the true outcome distribution. This approach not only eases the computational overhead but also helps achieve low regret by being able to transfer information between actions. In particular, BPM-TS runs orders of magnitude faster than any existing algorithm, opening the path for the model of partial monitoring to be applied in realistic settings involving large numbers of actions and outcomes. Future work includes extending our method to adversarial opponents. Bartók [11] already uses the idea of tracking the true outcome distribution with the help of a confidence parallelotope, which is rather close to our approach, but has the same shortcoming as other algorithms that track loss differences: it cannot transfer information between actions. Extending our results to problems with large action and outcome spaces is also an important direction: if we have some prior information about the similarities between outcomes and/or actions, we have a chance of achieving reasonable regret.

Acknowledgments
This research was supported in part by SNSF grant 200021 137971, ERC StG 307036 and a Microsoft Research Faculty Fellowship.

References
[1] V. G. Vovk. Aggregating strategies. In COLT, pages 371–386, 1990.
[2] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Inf. Comput., 108(2):212–261, 1994.
[3] Jean-Yves Audibert and Sébastien Bubeck. Minimax policies for adversarial and stochastic bandits. In COLT, 2009.
[4] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002.
[5] Yisong Yue, Josef Broder, Robert Kleinberg, and Thorsten Joachims. The K-armed dueling bandits problem.
Journal of Computer and System Sciences, 78(5):1538–1556, 2012.
[6] Nir Ailon, Thorsten Joachims, and Zohar Karnin. Reducing dueling bandits to cardinal bandits. arXiv preprint arXiv:1405.3396, 2014.
[7] Antonio Piccolboni and Christian Schindelhauer. Discrete prediction games with arbitrary feedback and loss. In COLT/EuroCOLT, pages 208–223, 2001.
[8] Nicolò Cesa-Bianchi, Gábor Lugosi, and Gilles Stoltz. Regret minimization under partial monitoring. Math. Oper. Res., 31(3):562–580, 2006.
[9] Gábor Bartók, Dávid Pál, and Csaba Szepesvári. Minimax regret of finite partial-monitoring games in stochastic environments. Journal of Machine Learning Research - Proceedings Track (COLT), 19:133–154, 2011.
[10] Gábor Bartók, Navid Zolghadr, and Csaba Szepesvári. An adaptive algorithm for finite stochastic partial monitoring. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012, 2012.
[11] Gábor Bartók. A near-optimal algorithm for finite partial-monitoring games against adversarial opponents. In COLT 2013 - The 26th Annual Conference on Learning Theory, June 12-14, 2013, Princeton University, NJ, USA, pages 696–710, 2013.
[12] Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012.
[13] Adish Singla and Andreas Krause. Truthful incentives in crowdsourcing tasks using regret minimization mechanisms. In International World Wide Web Conference (WWW), 2013.
[14] Amazon Mechanical Turk platform. URL https://www.mturk.com.
Mind the Nuisance: Gaussian Process Classification using Privileged Noise

Daniel Hernández-Lobato, Universidad Autónoma de Madrid, Madrid, Spain, daniel.hernandez@uam.es
Viktoriia Sharmanska, IST Austria, Klosterneuburg, Austria, vsharman@ist.ac.at
Kristian Kersting, TU Dortmund, Dortmund, Germany, first.last@cs.tu-dortmund.de
Christoph H. Lampert, IST Austria, Klosterneuburg, Austria, chl@ist.ac.at
Novi Quadrianto, SMiLe CLiNiC, University of Sussex, Brighton, United Kingdom, n.quadrianto@sussex.ac.uk

Abstract
The learning with privileged information setting has recently attracted a lot of attention within the machine learning community, as it allows the integration of additional knowledge into the training process of a classifier, even when this comes in the form of a data modality that is not available at test time. Here, we show that privileged information can naturally be treated as noise in the latent function of a Gaussian process classifier (GPC). That is, in contrast to the standard GPC setting, the latent function is not just a nuisance but a feature: it becomes a natural measure of confidence about the training data by modulating the slope of the GPC probit likelihood function. Extensive experiments on public datasets show that the proposed GPC method using privileged noise, called GPC+, improves over a standard GPC without privileged knowledge, and also over the current state-of-the-art SVM-based method, SVM+. Moreover, we show that advanced neural networks and deep learning methods can be compressed as privileged information.

1 Introduction

Prior knowledge is a crucial component of any learning system, as without a form of prior knowledge learning is provably impossible [1].
Many forms of integrating prior knowledge into machine learning algorithms have been developed: as a preference for certain prediction functions over others, as a Bayesian prior over parameters, or as additional information about the samples in the training set used for learning a prediction function. In this work, we rely on the last of these setups, adopting Vapnik and Vashist's learning using privileged information (LUPI), see e.g. [2, 3]: we want to learn a prediction function, e.g. a classifier, and in addition to the main data modality that is to be used for prediction, the learning system has access to additional information about each training example. This scenario has recently attracted considerable interest within the machine learning community because it reflects well the increasingly relevant situation of learning as a service: an expert trains a machine learning system for a specific task on request from a customer. Clearly, in order to achieve the best result, the expert will use all the information available to him or her, not necessarily just the information that the system itself will have access to during its operation after deployment. Typical scenarios for learning as a service include visual inspection tasks, in which a classifier makes real-time decisions based on the input from its sensor, but at training time additional sensors could be made use of, and the processing time per training example plays less of a role. Similarly, a classifier built into a robot or mobile device operates under strong energy constraints, while at training time energy is less of a problem, so additional data can be generated and made use of. A third scenario is when the additional data is confidential, as e.g. in health care applications. Specifically, a diagnosis system may be improved when more information is available at training time, e.g., specific blood tests, genetic sequences, or drug trials, for the subjects that form the training set.
However, the same data may not be available at test time, as obtaining it could be impractical, unethical, or illegal. We propose a novel method for using privileged information based on the framework of Gaussian process classifiers (GPCs). The privileged data enters the model in the form of a latent variable, which modulates the noise term of the GPC. Because the noise is integrated out before obtaining the final model, the privileged information is only required at training time, not at prediction time. The most interesting aspect of the proposed model is that by this procedure, the influence of the privileged information becomes very interpretable: its role is to model the confidence that the GPC has about any training example, which can be directly read off from the slope of the probit likelihood. Instances that are easy to classify by means of their privileged data cause a faster-increasing probit, which means the GPC trusts the training example and tries to fit it well. Instances that are hard to classify result in a slowly increasing slope, so that the GPC considers them less reliable and does not put a lot of effort into fitting their labels well. Our experiments on multiple datasets show that this procedure leads not just to more interpretable models, but also to better prediction accuracy.

Related work: The LUPI framework was originally proposed by Vapnik and Vashist [2], inspired by a thought experiment: when training a soft-margin SVM, what if an oracle provided us with the optimal values of the slack variables? As it turns out, this would provably reduce the amount of training data needed, and consequently Vapnik and Vashist proposed the SVM+ classifier, which uses privileged data to predict values for the slack variables; this led to improved performance on several categorisation tasks and found applications, e.g., in finance [4].
This setup was subsequently improved by a faster training algorithm [5] and a better theoretical characterisation [3], and it was generalised, e.g., to the learning-to-rank setting [6], clustering [7], metric learning [8] and multi-class data classification [9]. Recently, however, it was shown that the main effect of the SVM+ procedure is to assign a data-dependent weight to each training example in the SVM objective [10]. The proposed method, GPC+, constitutes the first Bayesian treatment of classification using privileged information. The resulting privileged-noise approach is related to the input-modulated noise commonly used in regression tasks, where several Bayesian treatments of heteroscedastic regression using GPs have been proposed. Since the predictive density and marginal likelihood are no longer analytically tractable, most works deal with approximate inference, using techniques such as Markov chain Monte Carlo [11], maximum a posteriori [12], and variational Bayes [13]. To our knowledge, however, there is no prior work on heteroscedastic classification using GPs (we will elaborate on the reasons in Section 2.1), and this work is the first to develop approximate inference based on expectation propagation for the heteroscedastic noise case in the context of classification.

2 GPC+: Gaussian process classification with privileged noise

For self-consistency, we first review the GPC model [14] with an emphasis on the noise-corrupted latent Gaussian process view. Then, we show how to treat privileged information as heteroscedastic noise in this process. An elegant aspect of this view is how the privileged noise is able to distinguish between easy and hard samples and to re-calibrate the uncertainty on the class label of each instance.

2.1 Gaussian process classifier with noisy latent process

Consider a set of N input-output data points or samples D = {(x_1, y_1), ..., (x_N, y_N)} ⊂ R^d × {0, 1}.
Assume that the class label y_i of the sample x_i has been generated as y_i = I[ f̃(x_i) ≥ 0 ], where f̃(·) is a noisy latent function and I[·] is the Iverson bracket notation, i.e., I[P] = 1 when the condition P is true, and 0 otherwise. Induced by the label-generation process, we adopt the following form of likelihood function for f̃ = (f̃(x_1), ..., f̃(x_N))^⊤:

Pr(y | f̃, X = (x_1, ..., x_N)^⊤) = ∏_{n=1}^{N} Pr(y_n = 1 | x_n, f̃) = ∏_{n=1}^{N} I[ f̃(x_n) ≥ 0 ],   (1)

where f̃(x_n) = f(x_n) + ϵ_n, with f(x_n) being the noise-free latent function. The noise term ϵ_n is assumed to be independent and normally distributed with zero mean and variance σ², that is, ϵ_n ∼ N(ϵ_n | 0, σ²). To make inference about f̃(x_n), we need to specify a prior over this function. We proceed by imposing a zero-mean Gaussian process prior [14] on the noise-free latent function, that is, f(x_n) ∼ GP(0, k(x_n, ·)), where k(·, ·) is a positive-definite kernel function [15] that specifies prior properties of f(·). A typical kernel function that allows for non-linear smooth functions is the squared exponential kernel k_f(x_n, x_m) = θ exp(−‖x_n − x_m‖² / (2ℓ²)), where θ controls the prior amplitude of f(·) and ℓ controls its prior smoothness. The prior and the likelihood are combined using Bayes' rule to get the posterior of f̃(·), namely Pr(f̃ | X, y) = Pr(y | f̃, X) Pr(f̃) / Pr(y | X). We can simplify the above noisy latent process view by integrating out the noise term ϵ_n and writing down the individual likelihood at sample x_n in terms of the noise-free latent function f(·):

Pr(y_n = 1 | x_n, f) = ∫ I[ f̃(x_n) ≥ 0 ] N(ϵ_n | 0, σ²) dϵ_n = Φ_{(0,σ²)}(f(x_n)),   (2)

where we have used that f̃(x_n) = f(x_n) + ϵ_n, and Φ_{(µ,σ²)}(·) is a Gaussian cumulative distribution function (CDF) with mean µ and variance σ². Typically the standard Gaussian CDF, Φ_{(0,1)}(·), is used in the likelihood of (2).
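The kernel and the integrated-out likelihood in (1)-(2) are straightforward to state in code; the sketch below also verifies the integral in (2) by Monte Carlo (the helper names are illustrative, not from the paper):

```python
import math
import numpy as np

def sq_exp_kernel(X1, X2, theta=1.0, ell=1.0):
    # Squared exponential kernel k_f(x, x') = theta * exp(-||x - x'||^2 / (2 ell^2)).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return theta * np.exp(-d2 / (2.0 * ell ** 2))

def probit_likelihood(f, sigma2=1.0):
    # Eq. (2): Pr(y_n = 1 | x_n, f) = Phi_{(0, sigma^2)}(f(x_n)),
    # i.e. the noise eps_n ~ N(0, sigma^2) integrated out of I[f + eps >= 0].
    return 0.5 * (1.0 + math.erf(f / math.sqrt(2.0 * sigma2)))

# Monte Carlo check of the integral in Eq. (2): averaging the step
# likelihood over draws of eps recovers the probit value.
rng = np.random.default_rng(0)
f_val, sigma2 = 0.7, 1.69
eps = rng.normal(0.0, math.sqrt(sigma2), size=500_000)
mc_estimate = float(np.mean(f_val + eps >= 0.0))
```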
Coupled with a Gaussian process prior on the latent function f(·), this results in the widely adopted noise-free latent Gaussian process view with probit likelihood. The equivalence between a noise-free latent process with probit likelihood and a noisy latent process with step-function likelihood is widely known [14]. It is also widely accepted that the function f̃(·) (or the function f(·)) is a nuisance function, as we do not observe its value and its sole purpose is a convenient formulation of the model [14]. In this paper, however, we show that by using privileged information as the noise term, the latent function f̃ comes to play a crucial role: the latent function with privileged noise adjusts the slope transition in the Gaussian CDF to be faster or slower, corresponding to more or less certainty about the samples in the original input space.

2.2 Introducing privileged information into the nuisance function

In the learning under privileged information (LUPI) paradigm [2], besides the input data points {x_1, ..., x_N} and associated labels {y_1, ..., y_N}, we are given additional information x_n^* ∈ R^{d*} about each training instance x_n. However, this privileged information will not be available for unseen test instances. Our goal is to exploit the additional data x* to influence our choice of the latent function f̃(·). This needs to be done while making sure that the function does not directly use the privileged data as input, as it is simply not available at test time. We achieve this naturally by treating the privileged information as heteroscedastic (input-dependent) noise in the latent process. Our classification model with privileged noise is then as follows:

Likelihood model: Pr(y_n = 1 | x_n, f̃) = I[ f̃(x_n) ≥ 0 ], where x_n ∈ R^d   (3)
Assumption: f̃(x_n) = f(x_n) + ϵ_n   (4)
Privileged noise model: ϵ_n i.i.d. ∼ N(ϵ_n | 0, z(x_n^*) = exp(g(x_n^*))), where x_n^* ∈ R^{d*}   (5)
GP prior model: f(x_n) ∼ GP(0, k_f(x_n, ·)) and g(x_n^*) ∼ GP(0, k_g(x_n^*, ·))
(6)

In the above, the function exp(·) is needed to ensure positivity of the noise variance. The term k_g(·, ·) is a positive-definite kernel function that specifies the prior properties of another latent function g(·), which is evaluated in the privileged space x*. Crucially, the noise term ϵ_n is now heteroscedastic, that is, it has a different variance z(x_n^*) at each input point x_n. This is in contrast to the standard GPC approach discussed in Section 2.1, where the noise term is homoscedastic, ϵ_n ∼ N(ϵ_n | 0, z(x_n^*) = σ²). An input-dependent noise term is very common in regression tasks with continuous output values y_n ∈ R, resulting in heteroscedastic regression models, which have proven more flexible in numerous applications, as already touched upon in the section on related work. However, to our knowledge, there is no prior work on heteroscedastic classification models. This is not surprising, as the nuisance view of the latent function renders a flexible input-dependent noise pointless.

[Figure 1 appears here. Left panel: Φ_{(0,exp(g(x_n^*)))}(f(x_n)) plotted against f(x_n) for exp(g(x_n^*)) ∈ {0.5, 1.0, 5.0}. Right panel: posterior mean of Φ for an easy and a difficult instance, AwA (DeCAF), Chimpanzee v. Giant Panda.]
Figure 1: Effects of privileged noise on the nuisance function. (Left) On synthetic data. Suppose for an input x_n, the latent function value is f(x_n) = 1. Now also assume that the associated privileged information x_n^* for the n-th data point deems the sample difficult, say exp(g(x_n^*)) = 5.0. Then the likelihood will reflect this uncertainty: Pr(y_n = 1 | f, g, x_n, x_n^*) = 0.58. In contrast, if the associated privileged information considers the sample easy, say exp(g(x_n^*)) = 0.5, the likelihood is very certain: Pr(y_n = 1 | f, g, x_n, x_n^*) = 0.98. (Right) On real data taken from our experiments in Sec. 4.
The posterior means of the Φ(·) function (solid) and its 1-standard-deviation confidence interval (dash-dot) for easy (blue) and difficult (black) instances of the Chimpanzee v. Giant Panda binary task on the Animals with Attributes (AwA) dataset. (Best viewed in color.)

In the context of privileged information, heteroscedastic classification is a very sensible idea, which is best illustrated by investigating the effect of privileged information in the equivalent formulation of a noise-free latent process, i.e., when one integrates out the privileged input-dependent noise term:

Pr(y_n = 1 | x_n, x_n^*, f, g) = ∫ I[ f̃(x_n) ≥ 0 ] N(ϵ_n | 0, exp(g(x_n^*))) dϵ_n = Φ_{(0,exp(g(x_n^*)))}(f(x_n)) = Φ_{(0,1)}( f(x_n) / √(exp(g(x_n^*))) ).   (7)

This equation shows that the privileged information adjusts the slope transition of the Gaussian CDF through the latent function g(·). For difficult samples the latent function g(·) will be high, the slope transition will be slower, and thus there will be more uncertainty in the likelihood Pr(y_n = 1 | x_n, x_n^*, f, g). For easy samples, however, g(·) will be low, the slope transition will be faster, and thus there will be less uncertainty in the likelihood term. This behaviour is illustrated in Figure 1. For samples that are non-informative in the privileged space, the value of g should equal a global noise level, as in a standard GPC. Thus, privileged information should in principle never hurt. Proving this theoretically is, however, an interesting and challenging research direction. Experimentally, we do observe the described behaviour in the experiments section.

2.3 Posterior and prediction on test data

Define g = (g(x_1^*), ..., g(x_N^*))^⊤ and X* = (x_1^*, ..., x_N^*)^⊤.
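A numerical illustration of Eq. (7): larger privileged noise exp(g(x_n^*)) flattens the probit and pulls the likelihood towards the uninformative value 0.5 (`privileged_probit` is a hypothetical helper, not from the paper):

```python
import math

def privileged_probit(f, g):
    # Eq. (7): Pr(y_n = 1 | x_n, x_n^*, f, g)
    #        = Phi_{(0,1)}( f(x_n) / sqrt(exp(g(x_n^*))) ).
    return 0.5 * (1.0 + math.erf(f / math.sqrt(2.0 * math.exp(g))))
```

For a fixed f(x_n) = 1, increasing g monotonically moves the likelihood from near-certain towards 0.5, which is exactly the easy-versus-difficult behaviour described in the Figure 1 caption.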
Given the likelihood Pr(y | X, X*, f, g) = ∏_{n=1}^{N} Pr(y_n = 1 | f, g, x_n, x_n^*), with the individual terms given in (7), and the Gaussian process priors on the functions, the posterior for f and g is

Pr(f, g | y, X, X*) = Pr(y | X, X*, f, g) Pr(f) Pr(g) / Pr(y | X, X*),   (8)

where Pr(y | X, X*) can be maximised with respect to a set of hyper-parameter values, such as the amplitude θ and the smoothness ℓ of the kernel functions [14]. For a previously unseen test point x_new ∈ R^d, the predictive distribution for its label y_new is given by

Pr(y_new = 1 | y, X, X*) = ∫ I[ f̃(x_new) ≥ 0 ] Pr(f_new | f) Pr(f, g | y, X, X*) df dg df_new,   (9)

where Pr(f_new | f) is a Gaussian conditional distribution. We note that in (9) we do not consider the privileged information x*_new associated with x_new. The interpretation is that we consider homoscedastic noise at test time. This is a reasonable approach, as there is no additional information for increasing or decreasing our confidence in the newly observed data x_new. Finally, we predict the label for a test point via Bayesian decision theory: the predicted label is the one with the largest probability.

3 Expectation propagation with numerical quadrature

Unfortunately, as for most interesting Bayesian models, inference in the GPC+ model is very challenging. Already in the homoscedastic case, the predictive density and marginal likelihood are not tractable. Here, we therefore adapt Minka's expectation propagation (EP) [16] with numerical quadrature for approximate inference. Our choice is supported by the fact that EP is the preferred method for approximate inference in GPCs, in terms of accuracy and computational cost [17, 18]. Consider the joint distribution of f, g and y, Pr(y | X, X*, f, g) Pr(f) Pr(g), where Pr(f) and Pr(g) are Gaussian process priors and the likelihood Pr(y | X, X*, f, g) equals ∏_{n=1}^{N} Pr(y_n | x_n, x_n^*, f, g), with Pr(y_n | x_n, x_n^*, f, g) given by (7).
EP approximates each non-normal factor in this distribution by an un-normalised bi-variate normal distribution of f and g (we assume independence between f and g). The only non-normal factors are those of the likelihood, which are approximated as

Pr(y_n | x_n, x_n^*, f, g) ≈ γ_n(f, g) = z_n N(f(x_n) | m_f, v_f) N(g(x_n^*) | m_g, v_g),   (10)

where the parameters z_n, m_f, v_f, m_g and v_g are to be found by EP. The posterior approximation Q computed by EP results from normalising the EP approximate joint with respect to f and g. That is, Q is obtained by replacing each likelihood factor by the corresponding approximate factor γ_n:

Pr(f, g | X, X*, y) ≈ Q(f, g) := Z^{-1} [ ∏_{n=1}^{N} γ_n(f, g) ] Pr(f) Pr(g),   (11)

where Z is a normalisation constant that approximates the model evidence Pr(y | X, X*). The normal distribution belongs to the exponential family of probability distributions and is closed under products and division. It is hence possible to show that Q is the product of two multi-variate normals [19]: the first normal approximates the posterior for f and the second the posterior for g. EP tries to fix the parameters of γ_n so that it is similar to the exact factor Pr(y_n | x_n, x_n^*, f, g) in regions of high posterior probability [16]. For this, EP iteratively updates each γ_n until convergence, to minimise KL( Pr(y_n | x_n, x_n^*, f, g) Q^{old} / Z_n ∥ Q ), where Q^{old} is a normal distribution proportional to [ ∏_{n′≠n} γ_{n′} ] Pr(f) Pr(g) with all variables other than f(x_n) and g(x_n^*) marginalised out, Z_n is simply a normalisation constant, and KL(·∥·) denotes the Kullback–Leibler divergence between probability distributions. Assume Q^{new} is the distribution minimising the previous divergence. Then γ_n ∝ Q^{new} / Q^{old}, and the parameter z_n of γ_n is fixed to guarantee that γ_n integrates to the same value as the exact factor with respect to Q^{old}. The minimisation of the KL divergence involves matching expected sufficient statistics (mean and variance) between Pr(y_n | x_n, x_n^*, f, g) Q^{old} / Z_n and Q^{new}.
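The moment matching above requires the site normalisation constant Z_n, which (as shown next in Eq. (12)) reduces to a one-dimensional integral. Below is a sketch using Gauss-Hermite quadrature; this particular rule and the helper name are assumptions, since the text only specifies "a one-dimensional quadrature".

```python
import math
import numpy as np

def site_normaliser_Zn(y, m_f, v_f, m_g, v_g, n_quad=30):
    # Eq. (12): Z_n = integral of Phi( y * m_f / sqrt(v_f + exp(g)) )
    # against N(g | m_g, v_g), approximated with Gauss-Hermite nodes
    # via the substitution g = m_g + sqrt(2 v_g) * t.
    t, w = np.polynomial.hermite.hermgauss(n_quad)
    g = m_g + math.sqrt(2.0 * v_g) * t
    phi = 0.5 * (1.0 + np.vectorize(math.erf)(
        y * m_f / np.sqrt(2.0 * (v_f + np.exp(g)))))
    return float(np.sum(w * phi) / math.sqrt(math.pi))
```

A quick sanity check: because Phi(a) + Phi(-a) = 1 pointwise, the values for y = +1 and y = -1 must sum to one under the same Gaussian measure.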
These expectations can be obtained from the derivatives of log Z_n with respect to the (natural) parameters of Q^{old} [19]. Unfortunately, the computation of log Z_n in closed form is intractable. We show here that it can be approximated by a one-dimensional quadrature. Denote by m_f, v_f, m_g and v_g the means and variances of Q^{old} for f(x_n) and g(x_n^*), respectively. Then,

Z_n = ∫ Φ_{(0,1)}( y_n m_f / √(v_f + exp(g(x_n^*))) ) N(g(x_n^*) | m_g, v_g) dg(x_n^*).   (12)

Thus, EP only requires five quadratures to update each γ_n: one to compute log Z_n and four more to compute its derivatives with respect to m_f, v_f, m_g and v_g. After convergence, Q can be used to approximate predictive distributions, and the normalisation constant Z can be maximised to find good values for the model's hyper-parameters. In particular, it is possible to compute the gradient of Z with respect to the parameters of the Gaussian process priors for f and g [19]. An R language implementation of GPC+ using EP for approximate inference can be found in the supplementary material.

4 Experiments

We investigate the performance of GPC+. To this aim, we considered three types of binary classification tasks corresponding to different privileged information, using two real-world datasets: Attribute Discovery and Animals with Attributes. We detail these experiments in turn in the following sections.

Methods: We compared our proposed GPC+ method with the well-established LUPI method based on SVMs, SVM+ [5]. As a reference, we also fit standard GP and SVM classifiers learning on the original space R^d (the GPC and SVM baselines). For all four methods, we used a squared exponential kernel with amplitude parameter θ and smoothness parameter ℓ. For simplicity, we set θ = 1.0 in all cases. There are two hyper-parameters in GPC (the smoothness parameter ℓ and the noise variance σ²) and also two in GPC+ (the smoothness parameters ℓ of the kernels k_f(·, ·) and k_g(·, ·)). In GPC and GPC+, we used type-II maximum likelihood to find all hyper-parameters.
SVM has two knobs, i.e., smoothness and regularisation, and SVM+ has four knobs, two smoothness and two regularisation parameters. In SVM we used a grid search guided by cross-validation to set all hyper-parameters. However, this procedure was too expensive for finding the best parameters in SVM+. Thus, we used the performance on a separate validation set to guide the search. This means that we give a competitive advantage to SVM+ over the other methods, which do not use the validation set. Evaluation metric: To evaluate the performance of each method we used the classification error measured on an independent test set. We performed 100 repeats of all the experiments to obtain better statistics of the performance, and we report the mean and the standard deviation of the error.

4.1 Attribute discovery dataset

The data set was collected from a website that aggregates product data from a variety of e-commerce sources and includes both images and associated textual descriptions [20]. The images and texts are grouped into 4 broad shopping categories: bags, earrings, ties, and shoes. We used 1800 samples from this dataset. We generated 6 binary classification tasks for each pair of the 4 classes, with 200 samples for training, 200 samples for validation, and the rest of the samples for testing performance. Neural networks on texts as privileged information: We used images as the original domain and texts as the privileged domain. This setting was also explored in [6]. However, we used a different dataset, because the textual descriptions of the images used in [6] are sparse and contain duplicates, and we extracted more advanced text features instead of simple term frequency (TF) features. For the image representation, we extracted SURF descriptors [21] and constructed a codebook of 100 visual words using k-means clustering. For the text representation, we extracted 200-dimensional continuous word-vectors using a neural network skip-gram architecture [22] (https://code.google.com/p/word2vec/).
To convert this word representation into a fixed-length sentence representation, we constructed a codebook of 100 word-vectors, again using k-means clustering. We note that a more elaborate approach to transform word features into sentence or document features has recently been developed [23], and we plan to explore this in the future. We performed PCA for dimensionality reduction in the original and privileged domains and only kept the top 50 principal components. Finally, we standardised the data so that each feature had zero mean and unit standard deviation. The experimental results are summarised in Table 1. On average over the 6 tasks, SVM with hinge loss outperforms GPC with probit likelihood. However, GPC+ significantly improves over GPC, providing the best results on average. This clearly shows that GPC+ is able to employ the neural network textual representation as privileged information. In contrast, SVM+ produced the same result as SVM. We suspect this is because SVM already shows strong performance on the original image space, coupled with the difficulty of finding the best values of the four hyper-parameters of SVM+. Keep in mind that in SVM+ we discretised the hyper-parameter search space over 625 (5 × 5 × 5 × 5) possible combination values and used a separate validation set to estimate the resulting prediction performance.

4.2 Animals with attributes (AwA) dataset

The dataset was collected by querying image search engines for each of the 50 animal categories, which have complementary high-level descriptions of their semantic properties such as shape, colour, or habitat information, among others [24]. The semantic attributes per animal class were retrieved from a prior psychological study. We focused on the 10 categories corresponding to the test set of this dataset, for which the predicted attributes are provided based on the probabilistic DAP model [24].
The 10 classes are: chimpanzee, giant panda, leopard, persian cat, pig, hippopotamus, humpback whale, raccoon, rat, and seal, which have 6180 images in total. As in Section 4.1 and also in [6], we generated 45 binary classification tasks for each pair of the 10 classes, with 200 samples for training, 200 samples for validation, and the rest of the samples for testing the predictive performance.

Table 1: Average error rate in % (the lower the better) on the Attribute Discovery dataset over 100 repetitions. We used images as the original domain and the neural network word-vector representation of texts as the privileged domain. The best method for each binary task is highlighted in boldface. An average rank equal to one means that the corresponding method has the smallest error on the 6 tasks.

                             GPC          GPC+ (Ours)   SVM          SVM+
  bags v. earrings           9.79±0.12    9.50±0.11     9.89±0.14    9.89±0.13
  bags v. ties               10.36±0.16   10.03±0.15    9.44±0.16    9.47±0.13
  bags v. shoes              9.66±0.13    9.22±0.11     9.31±0.12    9.29±0.14
  earrings v. ties           10.84±0.14   10.56±0.13    11.15±0.16   11.11±0.16
  earrings v. shoes          7.74±0.11    7.33±0.10     7.75±0.13    7.63±0.13
  ties v. shoes              15.51±0.16   15.54±0.16    14.90±0.21   15.10±0.18
  average error on each task 10.65±0.11   10.36±0.12    10.41±0.11   10.42±0.11
  average ranking            3.0          1.8           2.7          2.5

Neural networks on images as privileged information: Deep learning methods have gained increased attention within the machine learning and computer vision communities in recent years, owing to their capability of extracting informative features and delivering strong predictive performance in many classification tasks. As such, we are interested in exploring the use of deep-learning-based features as privileged information, so that their predictive power can be used even if we do not have access to them at prediction time.
We used the standard SURF features [21] with 2000 visual words as the original domain and the recently proposed DeCAF features [25], extracted from the activations of a deep convolutional network trained in a fully supervised fashion, as the privileged domain. The DeCAF features have 4096 dimensions. All features are provided with the AwA dataset (http://attributes.kyb.tuebingen.mpg.de). We again performed PCA for dimensionality reduction in the original and privileged domains, kept only the top 50 principal components, and standardised the data. Attributes as privileged information: Following the experimental setting of [6], we also used images as the original domain and attributes as the privileged domain. Images were represented by 2000 visual words based on SURF descriptors, and attributes were in the form of 85-dimensional predicted attributes based on probabilistic binary classifiers [24]. As previously, we performed PCA, kept the top 50 principal components in the original domain, and standardised the data. The results of these experiments are shown in Figure 2 in terms of pairwise comparisons over the 45 binary tasks between GPC+ and the main baselines, GPC and SVM+. The complete results, with the error of each method GPC, GPC+, SVM, and SVM+ on each problem, are given in the supplementary material. In contrast to the results on the attribute discovery dataset, on the AwA dataset it is clear that GPC outperforms SVM in almost all of the 45 binary classification tasks (see the supplementary material). The average error of GPC over 4500 (45 tasks and 100 repeats per task) experiments is much lower than that of SVM. On the AwA dataset, SVM+ can take advantage of privileged information, be it deep convolutional DeCAF features or semantic attributes, and shows significant performance improvement over SVM. However, GPC+ still shows the best overall results and further improves the already strong performance of GPC.
As illustrated in Figure 1 (right), the privileged information modulates the slope of the probit likelihood function differently for easy and difficult examples: easy examples gain slope, and hence importance, whereas difficult ones lose importance in the classification. On this dataset we analysed our experimental results using the multiple-dataset statistical comparison method described in [26]. (Note that we were not able to use this method on the results of the attribute discovery dataset in Table 1, because the number of methods compared, 4, is almost equal to the number of tasks or datasets, 6.) The results of the statistical tests are summarised in Figure 3. When DeCAF features are used as privileged information, there is statistical evidence supporting that GPC+ performs best among the four methods, while when the semantic attributes are used as privileged information, GPC+ still performs best but there is not enough evidence to reject that GPC+ performs comparably to GPC.

Figure 2: Pairwise comparison of the proposed GPC+ method and the main baselines, shown via the relative difference of the error rate (top: GPC+ versus GPC, bottom: GPC+ versus SVM+), with one panel for DeCAF as privileged information and one for attributes as privileged information. The length of the 45 bars corresponds to the relative difference of the error rate over the 45 cases. Average error rates of each method on the AwA dataset across each of the 45 tasks are found in the supplementary material. (Best viewed in color.)

Figure 3: Average rank (the lower the better) of the four methods (SVM, SVM+, GPC+, GPC) and critical distance for statistically significant differences (see [26]) on the AwA dataset, with DeCAF and attributes as privileged information. An average rank equal to one means that a particular method has the smallest error on the 45 tasks.
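The average ranks and the critical distance reported in Figure 3 follow the procedure of [26]. A minimal stdlib sketch (function names are ours; the Studentized-range quantile q_alpha must be looked up in a table, and the value used in the example is an assumption):

```python
import math

def average_ranks(errors):
    """errors: list of tasks, each a dict mapping method name -> error.
    Returns method -> average rank (1 = best; ties receive the mean rank)."""
    methods = list(errors[0])
    ranks = {m: 0.0 for m in methods}
    for task in errors:
        ordered = sorted(methods, key=lambda m: task[m])
        i = 0
        while i < len(ordered):
            # group methods tied at the same error value
            j = i
            while j + 1 < len(ordered) and task[ordered[j + 1]] == task[ordered[i]]:
                j += 1
            mean_rank = (i + j) / 2 + 1
            for k in range(i, j + 1):
                ranks[ordered[k]] += mean_rank
            i = j + 1
    n = len(errors)
    return {m: r / n for m, r in ranks.items()}

def critical_distance(q_alpha, k, n):
    """Nemenyi critical distance CD = q_alpha * sqrt(k(k+1) / (6N)) for k methods
    compared over n tasks; q_alpha comes from a Studentized-range table."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))
```

Two methods whose average ranks differ by more than the critical distance are declared significantly different at the chosen level.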
Whenever the average ranks differ by more than the critical distance, there is statistical evidence (p-value < 10%) supporting a difference in the average ranks and hence in the performance. We also link two methods with a solid line if they are not statistically different from each other (p-value > 10%). When the DeCAF features are used as privileged information, there is statistical evidence supporting that GPC+ performs best among the four methods considered. When the attributes are used, GPC+ still performs best, but there is not enough evidence to reject that GPC+ performs comparably to GPC.

5 Conclusions and future work

We presented the first treatment of the learning with privileged information paradigm under the Gaussian process classification (GPC) framework, and called it GPC+. In GPC+ privileged information is used in the latent noise layer, resulting in a data-dependent modulation of the slope of the likelihood. The training time of GPC+ is about twice the training time of a standard Gaussian process classifier, because GPC+ must train two latent functions, f and g, instead of only one. Nevertheless, our results show that GPC+ is an effective way to use privileged information, which manifests itself in significantly better prediction accuracy. Furthermore, to our knowledge, this is the first time that a heteroscedastic noise term has been used to improve GPC. We have also shown that recent advances in continuous word-vector neural network representations [23] and deep convolutional networks for image representations [25] can be used as privileged information. For future work, we plan to extend the GPC+ framework to the multi-class case and to speed up computation by devising a quadrature-free expectation propagation method, similar to the ones in [27, 28].

Acknowledgement: D.
Hernández-Lobato is supported by Dirección General de Investigación MCyT and by Consejería de Educación CAM (projects TIN2010-21575-C02-02, TIN2013-42351-P and S2013/ICE-2845). V. Sharmanska is funded by the European Research Council under ERC grant agreement no 308036.

References
[1] D. H. Wolpert. The lack of a priori distinctions between learning algorithms. Neural Computation, 8:1341–1390, 1996.
[2] V. Vapnik and A. Vashist. A new learning paradigm: Learning using privileged information. Neural Networks, 22:544–557, 2009.
[3] D. Pechyony and V. Vapnik. On the theory of learning with privileged information. In Advances in Neural Information Processing Systems (NIPS), pages 1894–1902, 2010.
[4] B. Ribeiro, C. Silva, A. Vieira, A. Gaspar-Cunha, and J. C. das Neves. Financial distress model prediction using SVM+. In International Joint Conference on Neural Networks (IJCNN), 2010.
[5] D. Pechyony and V. Vapnik. Fast optimization algorithms for solving SVM+. In Statistical Learning and Data Science, 2011.
[6] V. Sharmanska, N. Quadrianto, and C. H. Lampert. Learning to rank using privileged information. In International Conference on Computer Vision (ICCV), 2013.
[7] J. Feyereisl and U. Aickelin. Privileged information for data clustering. Information Sciences, 194:4–23, 2012.
[8] S. Fouad, P. Tino, S. Raychaudhury, and P. Schneider. Incorporating privileged information through metric learning. IEEE Transactions on Neural Networks and Learning Systems, 24:1086–1098, 2013.
[9] V. Sharmanska, N. Quadrianto, and C. H. Lampert. Learning to transfer privileged information, 2014. arXiv:1410.0389 [cs.CV].
[10] M. Lapin, M. Hein, and B. Schiele. Learning using privileged information: SVM+ and weighted SVM. Neural Networks, 53:95–108, 2014.
[11] P. W. Goldberg, C. K. I. Williams, and C. M. Bishop. Regression with input-dependent noise: A Gaussian process treatment. In Advances in Neural Information Processing Systems (NIPS), 1998.
[12] N. Quadrianto, K.
Kersting, M. D. Reid, T. S. Caetano, and W. L. Buntine. Kernel conditional quantile estimation via reduction revisited. In International Conference on Data Mining (ICDM), 2009.
[13] M. Lázaro-Gredilla and M. K. Titsias. Variational heteroscedastic Gaussian process regression. In International Conference on Machine Learning (ICML), 2011.
[14] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2006.
[15] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001.
[16] T. P. Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, Massachusetts Institute of Technology, 2001.
[17] H. Nickisch and C. E. Rasmussen. Approximations for binary Gaussian process classification. Journal of Machine Learning Research, 9:2035–2078, 2008.
[18] M. Kuss and C. E. Rasmussen. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679–1704, 2005.
[19] M. Seeger. Expectation propagation for exponential families. Technical report, Department of EECS, University of California, Berkeley, 2006.
[20] T. L. Berg, A. C. Berg, and J. Shih. Automatic attribute discovery and characterization from noisy web data. In European Conference on Computer Vision (ECCV), 2010.
[21] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool. Speeded-up robust features (SURF). Computer Vision and Image Understanding, 110:346–359, 2008.
[22] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NIPS), 2013.
[23] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. In International Conference on Machine Learning (ICML), 2014.
[24] C. H. Lampert, H. Nickisch, and S. Harmeling.
Attribute-based classification for zero-shot visual object categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36:453–465, 2014.
[25] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning (ICML), 2014.
[26] J. Demšar. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7:1–30, 2006.
[27] J. Riihimäki, P. Jylänki, and A. Vehtari. Nested expectation propagation for Gaussian process classification with a multinomial probit likelihood. Journal of Machine Learning Research, 14:75–109, 2013.
[28] D. Hernández-Lobato, J. M. Hernández-Lobato, and P. Dupont. Robust multi-class Gaussian process classification. In Advances in Neural Information Processing Systems (NIPS), 2011.
On Integrated Clustering and Outlier Detection

Lionel Ott, University of Sydney, lott4241@uni.sydney.edu.au
Linsey Pang, University of Sydney, qlinsey@it.usyd.edu.au
Fabio Ramos, University of Sydney, fabio.ramos@sydney.edu.au
Sanjay Chawla, University of Sydney, sanjay.chawla@sydney.edu.au

Abstract

We model the joint clustering and outlier detection problem using an extension of the facility location formulation. The advantages of combining clustering and outlier selection include: (i) the resulting clusters tend to be compact and semantically coherent; (ii) the clusters are more robust against data perturbations; and (iii) the outliers are contextualised by the clusters and more interpretable. We provide a practical subgradient-based algorithm for the problem and also study the theoretical properties of the algorithm in terms of approximation and convergence. Extensive evaluation on synthetic and real data sets attests to both the quality and scalability of our proposed method.

1 Introduction

Clustering and outlier detection are often studied as separate problems [1]. However, it is natural to consider them simultaneously. For example, outliers can have a disproportionate impact on the location and shape of clusters, which in turn can help identify, contextualize and interpret the outliers. Pelillo [2] proposed a game-theoretic definition of clustering algorithms which emphasizes the need for methods that require as little information as possible while being capable of dealing with outliers. The area of "robust statistics" studies the design of statistical methods which are less sensitive to the presence of outliers [3]. For example, the median and trimmed mean estimators are less sensitive to outliers than the mean. Similarly, versions of Principal Component Analysis (PCA) have been proposed [4, 5, 6] which are more robust against model mis-specification and outliers.
An important primitive in the area of robust statistics is the notion of the Minimum Covariance Determinant (MCD): given a set of n multivariate data points and a parameter ℓ, the objective is to identify a subset of points which minimizes the determinant of the variance-covariance matrix over all subsets of size n − ℓ. The resulting variance-covariance matrix can be integrated into the Mahalanobis distance and used as part of a chi-square test to identify multivariate outliers [7]. In the theoretical computer science literature, similar problems have been studied in the context of clustering and facility location. For example, Chen [8] proposed a constant-factor approximation algorithm for the k-median with outliers problem: given n data points and parameters k and ℓ, the objective is to remove a set of ℓ points such that the cost of k-median clustering on the remaining n − ℓ points is minimized. Our model is similar to the one proposed by Charikar et al. [9], who used a primal-dual formulation to derive an approximation algorithm for the facility location with outliers problem. More recently, Chawla and Gionis [10] proposed k-means--, a practical and scalable algorithm for the k-means with outliers problem. k-means-- is a simple extension of the k-means algorithm and is guaranteed to converge to a local optimum. However, the algorithm inherits the weaknesses of the classical k-means algorithm, namely: (i) the requirement of setting the number of clusters k, and (ii) the initial specification of the k centroids. It is well known that the choice of k and the initial set of centroids can have a disproportionate impact on the result. In this paper we model clustering and outlier detection as an integer programming optimization task and then propose a Lagrangian relaxation to design a scalable subgradient-based algorithm.
The resulting algorithm discovers the number of clusters and requires as input: the distance (discrepancy) between pairs of points, the cost of creating a new cluster, and the number ℓ of outliers to select. The remainder of the paper is structured as follows. In Section 2 we formally describe the problem as an integer program. In Section 3 we describe the Lagrangian relaxation and the details of the subgradient algorithm. The approximation properties of the relaxation and the convergence of the subgradient algorithm are discussed in Section 4. Experiments on synthetic and real data sets are the focus of Section 5, before concluding with Section 6. The supplementary section derives an extension of the affinity propagation algorithm [11] to detect outliers (APOC), which will be used for comparison.

2 Problem Formulation

The Facility Location with Outliers (FLO) problem is defined as follows [9]. Given a set of data points with distances D = {d_{ij}}, the cluster creation costs c_j, and the number of outliers ℓ, we define the task of clustering and outlier detection as the problem of finding the assignments to the binary exemplar indicators y_j, outlier indicators o_i, and point assignments x_{ij} that minimize the following objective function:

FLO ≡ min \sum_j c_j y_j + \sum_i \sum_j d_{ij} x_{ij},   (1)

subject to

x_{ij} ≤ y_j   (2)
o_i + \sum_j x_{ij} = 1   (3)
\sum_i o_i = ℓ   (4)
x_{ij}, y_j, o_i ∈ {0, 1}.   (5)

In order to obtain a valid solution a set of constraints has been imposed:
• points can only be assigned to valid exemplars, Eq. (2);
• every point must be assigned to exactly one other point or declared an outlier, Eq. (3);
• exactly ℓ outliers have to be selected, Eq. (4);
• only integer solutions are allowed, Eq. (5).

These constraints describe the facility location problem with outlier detection. This formulation allows the algorithm to select the number of clusters automatically and implicitly defines outliers as those points whose presence in the dataset has the biggest negative impact on the overall solution.
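Given candidate assignments, the objective (1) and the constraints (2)-(5) can be evaluated directly. A minimal sketch with a hypothetical helper name:

```python
def flo_cost(d, c, x, y, o, ell):
    """Evaluate the FLO objective (1) after checking constraints (2)-(5).
    d: n x n distance matrix, c: cluster-creation costs,
    x: n x n assignment indicators, y: exemplar indicators, o: outlier indicators."""
    n = len(d)
    assert sum(o) == ell                      # (4): exactly ell outliers
    for i in range(n):
        assert o[i] + sum(x[i]) == 1          # (3): assigned once, or an outlier
        for j in range(n):
            assert x[i][j] <= y[j]            # (2): only open exemplars usable
            assert x[i][j] in (0, 1)          # (5): integrality
    return sum(c[j] * y[j] for j in range(n)) + \
           sum(d[i][j] * x[i][j] for i in range(n) for j in range(n))
```

For example, with three points where point 0 is an exemplar, point 1 is assigned to it, and point 2 is declared the single outlier, the cost is the creation cost of the exemplar plus the assignment distances.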
The problem is known to be NP-hard, and while approximation algorithms have been proposed when distances are assumed to be a metric, there is no known algorithm which is practical, scalable, and comes with solution guarantees [9]. For example, a linear relaxation of the problem solved with a linear programming solver does not scale to large data sets, as the number of variables is O(n²). In fact, we will show that the Lagrangian relaxation of the problem is exactly equivalent to a linear relaxation, and the corresponding subgradient algorithm scales to large data sets, has a small memory footprint, can be easily parallelized, and does not require access to a linear programming solver.

3 Lagrangian Relaxation of FLO

The Lagrangian relaxation is based on the following recipe and observations: (i) relax (or dualize) the "tough" constraints of the original FLO problem by moving them into the objective; (ii) associate a Lagrange multiplier (λ) with each relaxed constraint, which intuitively captures the price of the constraint not being satisfied; (iii) for any non-negative λ, FLO(λ) is a lower bound on the FLO problem, and as a function of λ, FLO(λ) is concave but non-differentiable; (iv) use a subgradient algorithm to maximize FLO(λ) as a function of λ in order to close the gap between the primal and the dual. More specifically, we relax the constraint o_i + \sum_j x_{ij} = 1 for each i and associate a Lagrange multiplier λ_i with each constraint. Rearranging the terms yields:

FLO(λ) = min \underbrace{\sum_i (1 − o_i) λ_i}_{\text{outliers}} + \underbrace{\sum_j c_j y_j + \sum_i \sum_j (d_{ij} − λ_i) x_{ij}}_{\text{clustering}}   (6)

subject to

x_{ij} ≤ y_j   (7)
\sum_i o_i = ℓ   (8)
x_{ij}, y_j, o_i ∈ {0, 1}  ∀ i, j.   (9)

We can now solve the relaxed problem with a heuristic, finding valid assignments that attempt to minimize Eq. (6) without optimality guarantees [12]. The Lagrange multipliers λ act as a penalty incurred for constraint violations, which we try to minimize. From Eq.
(6) we see that the penalty influences two parts: outlier selection and clustering. The heuristic starts by selecting good outliers, designating the ℓ points with largest λ as outliers, as this removes a large part of the penalty. For the remaining N − ℓ points, clustering assignments are found by setting x_{ij} = 0 for all pairs for which d_{ij} − λ_i ≥ 0. To select the exemplars we compute

μ_j = c_j + \sum_{i : d_{ij} − λ_i < 0} (d_{ij} − λ_i),   (10)

which represents the amortized cost of selecting point j as an exemplar and assigning points to it. Thus, if μ_j < 0 we select point j as an exemplar and set y_j = 1; otherwise we set y_j = 0. Finally, we set x_{ij} = y_j if d_{ij} − λ_i < 0. From this complete assignment found by the heuristic we compute a new subgradient s^t and update the Lagrange multipliers λ^t as follows:

s_i^t = 1 − \sum_j x_{ij} − o_i   (11)
λ_i^t = max(λ_i^{t−1} + θ_t s_i, 0),   (12)

where θ_t is the step size at time t, computed as

θ_t = θ_0 α^t,  α ∈ (0, 1).   (13)

To obtain the final solution we repeat the above steps until the changes become small enough, at which point we extract a feasible solution. This is guaranteed to converge if a step function is used for which the following holds [12]:

lim_{n→∞} \sum_{t=1}^{n} θ_t = ∞  and  lim_{t→∞} θ_t = 0.   (14)

A high-level description is given in Algorithm 1.

4 Analysis of the Lagrangian Relaxation

In this section we analyze the solution obtained from the Lagrangian relaxation (LR) method. Our analysis has two parts. In the first part, we show that the Lagrangian relaxation is exactly equivalent to solving the linear relaxation of the FLO problem: if FLO(IP), FLO(LP) and FLO(LR) are the optimal values of the integer program, the linear relaxation and the Lagrangian relaxation respectively, then FLO(LR) = FLO(LP). In the second part, we analyze the convergence rate of the subgradient method and the impact of outliers.
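The heuristic for FLO(λ) and the subgradient update of Eqs. (10)-(12) can be sketched as follows. This is a minimal stdlib sketch with function names of our own choosing; we assume here that points already declared outliers are excluded from the sum in Eq. (10), a detail the text leaves implicit:

```python
def solve_flo_lambda(d, c, lam, ell):
    """Heuristic for the relaxed problem FLO(lambda):
    (i) declare the ell points with largest lambda as outliers,
    (ii) open exemplar j iff its amortized cost mu_j (Eq. 10) is negative,
    (iii) set x_ij = y_j whenever d_ij - lambda_i < 0."""
    n = len(d)
    order = sorted(range(n), key=lambda i: -lam[i])
    o = [0] * n
    for i in order[:ell]:
        o[i] = 1
    y = []
    for j in range(n):
        mu = c[j] + sum(min(0.0, d[i][j] - lam[i]) for i in range(n) if not o[i])
        y.append(1 if mu < 0 else 0)
    x = [[1 if (y[j] and not o[i] and d[i][j] - lam[i] < 0) else 0
          for j in range(n)] for i in range(n)]
    return x, y, o

def subgradient_step(x, o, lam, theta):
    """One subgradient update (Eqs. 11-12):
    s_i = 1 - sum_j x_ij - o_i, then lambda_i <- max(lambda_i + theta * s_i, 0)."""
    n = len(lam)
    s = [1 - sum(x[i]) - o[i] for i in range(n)]
    return [max(lam[i] + theta * s[i], 0.0) for i in range(n)]
```

Note that the heuristic can assign a point to several exemplars or to none; the subgradient s_i is then non-zero for exactly those points, which is what drives the multipliers toward a feasible assignment.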
Algorithm 1: LagrangianRelaxation()
  Initialize λ^0, x^0, t
  while not converged do
    s^t ← ComputeSubgradient(x^{t−1})
    λ^t ← ComputeLambda(s^t)
    x^t ← FLO(λ^t)  (solve via heuristic)
    t ← t + 1
  end

Figure 1: Visualization of the building blocks of the A matrix. The top left is an n² × n² identity matrix, followed by n row-stacked blocks of n × n negative identity matrices. To the right of those is another n² × n block of zeros. The final row in the block matrix consists of n² + n zeros followed by n ones.

4.1 Quality of the Lagrangian Relaxation

Consider the constraint set L = {(x, y, o) ∈ Z^{n²+2n} | x_{ij} ≤ y_j ∧ \sum_i o_i ≤ ℓ ∀ i, j}. It is well known that the optimal value FLO(LR) of the Lagrangian relaxation is equal to the cost of the following optimization problem [12]:

min \sum_j c_j y_j + \sum_i \sum_j x_{ij} d_{ij}   (15)
o_i + \sum_j x_{ij} = 1   (16)
conv(L),   (17)

where conv(L) is the convex hull of the set L. We now show that L is integral and therefore

conv(L) = {(x, y, o) ∈ R^{n²+2n} | x_{ij} ≤ y_j ∧ \sum_i o_i ≤ ℓ ∀ i, j}.

This in turn implies that FLO(LR) = FLO(LP). In order to show that L is integral, we establish that the constraint matrix corresponding to the set L is totally unimodular (TU). For completeness, we recall several important definitions and theorems from integer programming theory [12]:

Definition 1. A matrix A is totally unimodular if every square submatrix of A has determinant in the set {−1, 0, 1}.

Proposition 1. Given a linear program min{c^T x : Ax ≥ b, x ∈ R^n_+}, let b be an integer vector for which the problem instance has finite value. Then the problem has an integral optimal solution if A is totally unimodular.

An equivalent, and often easier to establish, definition of total unimodularity is captured in the following theorem.

Theorem 1. Let A be a matrix.
Then A is TU iff for any subset of rows X of A there exists a coloring of the rows of X with 1 or −1 such that the weighted sum of every column (restricting the sum to rows in X) is −1, 0 or 1.

We are now ready to state and prove the main theorem in this section.

Theorem 2. The matrix corresponding to the constraint set L is totally unimodular.

Proof. We need to consider the constraints

x_{ij} ≤ y_j  ∀ i, j   (18)
\sum_{i=1}^{n} o_i ≤ ℓ.   (19)

We can express the above constraints in the form Au = b, where u is the vector

u = [x_{11}, …, x_{1n}, …, x_{n1}, …, x_{nn}, y_1, …, y_n, o_1, …, o_n]^T.   (20)

The block matrix A is of the form

A = [ I  B  0 ]
    [ 0  0  1 ]   (21)

Here I is an n² × n² identity matrix, B is a stack of n matrices of size n × n where each element of the stack is a negative identity matrix, and 1 is a 1 × n block of 1's. See Figure 1 for a detailed visualization. To prove that A is TU, we use Theorem 1. Take any subset X of rows of A. Whether we color the rows of X by 1 or −1, the column sum (within X) of a column of I will be in {−1, 0, 1}. A similar argument holds for the columns of the block matrix 1. Now consider the submatrix B. We can express X as

X = ∪_{i=1, i∈B(X,:)}^{n} X_i,   (22)

where each X_i = {r ∈ X | X(r, i) = −1}. Given that B is a stack of negative diagonal matrices, X_i ∩ X_j = ∅ for i ≠ j. Now consider a column j of B. If X_j has an even number of −1's, split the elements of X_j evenly and color one half as 1 and the other as −1; then the sum of column j (for rows in X) will be 0. On the other hand, if a set of rows X_k has an odd number of −1's, color the rows of X_k alternately with 1 and −1. Since X_j and X_k are disjoint, their colorings can be carried out independently, and the sum of column j will be 1 or −1. Thus we satisfy the condition of Theorem 1 and conclude that A is TU.

4.2 Convergence of the Subgradient Method

As noted above, the Lagrangian dual is given by max{FLO(λ) | λ ≥ 0}.
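Theorem 2 can be sanity-checked numerically: build the block matrix A of Eq. (21) for a tiny n and verify by brute force that every square submatrix has determinant in {−1, 0, 1}. This is a verification sketch only (the enumeration is exponential, so it is feasible only for very small n; function names are ours):

```python
from itertools import combinations

def det(mat):
    # Exact integer determinant by cofactor expansion (fine for tiny matrices).
    n = len(mat)
    if n == 1:
        return mat[0][0]
    return sum((-1) ** j * mat[0][j]
               * det([row[:j] + row[j + 1:] for row in mat[1:]])
               for j in range(n))

def build_A(n):
    """Block matrix of Eq. (21): rows x_ij <= y_j (one +1 from I, one -1 from B),
    plus the final row summing the o_i variables."""
    rows = []
    for r in range(n * n):              # r encodes the pair (i, j) with j = r % n
        row = [0] * (n * n + 2 * n)
        row[r] = 1                      # coefficient of x_ij
        row[n * n + r % n] = -1         # coefficient of -y_j
        rows.append(row)
    rows.append([0] * (n * n + n) + [1] * n)  # sum_i o_i row
    return rows

def is_totally_unimodular(A):
    m, ncols = len(A), len(A[0])
    for k in range(1, min(m, ncols) + 1):
        for ri in combinations(range(m), k):
            for ci in combinations(range(ncols), k):
                sub = [[A[r][c] for c in ci] for r in ri]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True
```

For n = 2 the matrix is 5 × 8 and the full enumeration finishes instantly, confirming the coloring argument on that instance.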
Furthermore, we use a gradient ascent method to update the λ's as λ_i^t = max(λ_i^{t−1} + θ_t s_i, 0) for i = 1, …, n, where s_i^t = 1 − \sum_j x_{ij} − o_i and θ_t is the step size. Assuming that the norm of the subgradients is bounded, i.e., ∥s∥_2 ≤ G, and that the distance between the initial point and the optimal set satisfies ∥λ^1 − λ^∗∥_2 ≤ R, it is known that [13]

|Z(λ^t) − Z(λ^∗)| ≤ \frac{R² + G² \sum_{i=1}^{t} θ_i²}{2 \sum_{i=1}^{t} θ_i}.

This can be used to show that to obtain ϵ accuracy (for any step size), the number of iterations is lower bounded by O(RG/ϵ²). We examine the impact of integrating clustering and outliers on the convergence rate. We make the following observations:

Observation 1. At a given iteration t and for a given data point i, if o_i^t = 1 then \sum_j x_{ij}^t = 0 and s_i^t = 0, and therefore λ_i^{t+1} = λ_i^t.

Observation 2. At a given iteration t and for a given data point i, if o_i^t = 0 and the point i is assigned to exactly one exemplar, then \sum_j x_{ij}^t = 1, and therefore s_i^t = 0 and λ_i^{t+1} = λ_i^t.

In conjunction with the algorithm for solving FLO(λ) and the above observations, we can draw important conclusions regarding the behavior of the algorithm, including: (i) the λ values associated with outliers will be relatively larger and stabilize earlier, and (ii) the λ values of the exemplars will be relatively smaller and will take longer to stabilize.

5 Experiments

In this section we evaluate the proposed method on both synthetic and real data and compare it to other methods. We first present experiments using synthetic data to provide a quantitative analysis of the methods in a controlled environment. Then we present clustering and outlier results obtained on the MNIST image data set. We compare our Lagrangian relaxation (LR) based method to two other methods, k-means-- and an extension of affinity propagation [11] to outlier clustering (APOC), whose details can be found in the supplementary material. Both LR and APOC require a cost for creating clusters. We obtain this value as α ∗ median(d_{ij}), i.e.
the median of all distances multiplied by a scaling factor α, which typically lies in the range [1, 30]. The initial centroids required by k-means-- are found using k-means++ [14], and unless specified otherwise k-means-- is provided with the correct number of clusters k.

5.1 Synthetic Data

We use synthetic datasets for controlled performance evaluation and comparison between the different methods. The data is generated by randomly sampling k clusters with m points each from d-dimensional normal distributions N(μ, Σ) with randomly selected μ and Σ. To these clusters we add ℓ additional outlier points that have a low probability of belonging to any of the selected clusters. The distance between points is computed using the Euclidean distance. We focus on 2D distributions, as they are more challenging than higher-dimensional data due to the separability of the data. To assess the performance of the methods we use the following three metrics:

1. Normalized Jaccard index, which measures how accurately a method selects the ground-truth outliers. It is the Jaccard coefficient computed between the selected outliers O and the ground-truth outliers O∗, normalized by the best coefficient obtainable given the two set sizes:

J(O, O∗) = \frac{|O ∩ O∗| / |O ∪ O∗|}{min(|O|, |O∗|) / max(|O|, |O∗|)}.   (23)

2. Local outlier factor [15] (LOF), which measures the outlier quality of a point. We compute the ratio between the average LOF of O and that of O∗, which indicates the quality of the set of selected outliers.

3. V-Measure [16], which indicates the quality of the overall clustering solution. The outliers are considered as an additional class for this measure.

For the Jaccard index and V-Measure a value of 1 is optimal, while for the LOF ratio a larger value is better. Since the number of outliers ℓ, required by all methods, is typically not known exactly, we explore how its misspecification affects the results.
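Eq. (23) translates directly into code; a minimal sketch:

```python
def normalized_jaccard(selected, truth):
    """Normalized Jaccard index of Eq. (23): the Jaccard coefficient of the
    selected (O) and ground-truth (O*) outlier sets, divided by the best
    coefficient achievable given the set sizes, min(|O|,|O*|)/max(|O|,|O*|)."""
    O, Ostar = set(selected), set(truth)
    jaccard = len(O & Ostar) / len(O | Ostar)
    best = min(len(O), len(Ostar)) / max(len(O), len(Ostar))
    return jaccard / best
```

The normalization rewards a method that selects only true outliers even when it is asked for fewer (or more) outliers than the ground truth contains, since the plain Jaccard coefficient alone would penalize the size mismatch.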
We generate 2D datasets with 2000 inliers and 200 outliers and vary the number of outliers ℓ selected by the methods. The results in Figure 2 show that in general none of the methods fail completely if the value of ℓ is misspecified. Looking at the Jaccard index, which indicates the percentage of true outliers selected, we see that if ℓ is smaller than the true number of outliers all methods pick only outliers. When ℓ is greater than the true number of outliers we can see that LR and APOC improve with larger ℓ while k-means-- does only sometimes. This is due to the formulation of LR, which selects the largest outliers, as APOC does to some extent as well. This means that if some outliers are initially missed they are more likely to be selected if ℓ is larger than the true number of outliers. Looking at the LOF ratio we can see that selecting more outliers than are present in the data set reduces the score somewhat but not dramatically, which makes the method robust. Finally, the V-Measure results show that the overall clustering results remain accurate even if the number of outliers is misspecified. We experimentally investigate the quality of the solution by comparing with the results obtained by solving the LP relaxation using CPLEX. This comparison indicates what quality can typically be expected from the different methods. Additionally, we can evaluate the speed of these approximations. We evaluate 100 datasets, consisting of 2D Gaussian clusters and outliers, with varying numbers of points.

Figure 2: The impact of the number of outliers specified (ℓ) on the quality of the clustering and outlier detection performance, shown as Jaccard index, LOF ratio and V-Measure over the number of selected outliers. LR and APOC perform similarly, with more stability and better outlier choices compared to k-means--.
We can see that overestimating ℓ is more detrimental to the overall performance, as indicated by the LOF ratio and V-Measure, than underestimating it.

Figure 3: The graphs show how the number of points influences different measures. In (a) we compare the speedup of both LR and APOC over LP. (b) compares the total runtime needed to solve the clustering problem for LR and APOC. Finally, (c) plots the time required (on a log scale) for a single iteration of LR and APOC.

On average LR obtains 94%±5% of the LP objective value, APOC obtains an energy that is 95%±4% of the optimal solution found by LP, and k-means--, with correct k, obtains 86%±12% of the optimum. These results reinforce the previous analysis; LR and APOC perform similarly while outperforming k-means--. Next we look at the speedup of LR and APOC over LP. Figure 3 (a) shows that both methods are significantly faster, with the speedup increasing as the number of points increases. Overall, for a small price in quality the two methods obtain a significantly faster solution. k-means-- easily outperforms the other two methods with regard to speed, but has neither the accuracy nor the ability to infer the number of clusters directly from the data. Next we compare the runtime of LR and APOC. Figure 3 (b) shows the overall runtime of both methods for varying numbers of data points. Here we observe that APOC is faster than LR; however, by observing the time a single iteration takes, shown in Figure 3 (c), we see that LR is much faster on a per-iteration basis compared to APOC. In practice LR requires several times the number of iterations of APOC, which is affected by the step-size function used.
Using a more sophisticated method of computing the step size would provide large gains for LR. Finally, the biggest difference between LR and APOC is that the latter requires all messages and distances to be held in memory, which obviously scales poorly for large datasets. Conversely, LR computes the distances at runtime and only needs to store indicator vectors and a sparse assignment matrix, thus using much less memory. This makes LR amenable to processing large-scale datasets. For example, with single-precision floating point numbers, dense matrices and 10,000 points, APOC requires around 2200 MB of memory while LR only needs 370 MB. Further gains can be obtained by using sparse matrices, which is straightforward in the case of LR but complicated for APOC.

5.2 MNIST Data

The MNIST dataset, introduced by LeCun et al. [17], contains 28 × 28 pixel images of handwritten digits. We extract features from these images by representing them as 784-dimensional vectors, which are reduced to 25 dimensions using PCA. The distance between these vectors is computed using the L2 norm. In Figure 4 we show exemplary results obtained when processing 10,000 digits with the LR method with α = 5 and ℓ = 500.

Figure 4 (panels: (a) Digit 1, (b) Digit 4, (c) Outliers): Each row in (a) and (b) shows a different appearance of a digit captured by a cluster. The outliers shown in (c) tend to have a heavier than usual stroke, are incomplete, or are not recognizable as a digit.

Table 1: Evaluation of clustering results on the MNIST data set with different cost scaling values α for LR and APOC, as well as different settings for k-means--. Increasing the cost results in fewer clusters but, as a trade-off, reduces the homogeneity of the clusters.

                LR                      APOC    k-means--
α               5      15     25       15      n.a.   n.a.
V-Measure       0.52   0.67   0.54     0.53    0.51   0.58
Homogeneity     0.78   0.74   0.65     0.72    0.50   0.75
Completeness    0.39   0.61   0.46     0.42    0.52   0.47
Clusters        120    13     27       51      10     40
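The MNIST feature pipeline described above — flatten each 28 × 28 image to a 784-dimensional vector, reduce to 25 dimensions with PCA, then take L2 distances — might look as follows. This is only a sketch: the random array stands in for the real images, and scikit-learn is an assumed dependency:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import pairwise_distances

# Hypothetical stand-in for n flattened 28x28 MNIST images (784-dim vectors).
rng = np.random.default_rng(0)
images = rng.random((100, 784))

pca = PCA(n_components=25)
features = pca.fit_transform(images)   # 100 x 25 feature vectors
dists = pairwise_distances(features)   # L2 distance matrix fed to LR / APOC / k-means--
print(features.shape, dists.shape)     # (100, 25) (100, 100)
```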
Each row in Figure 4 (a) and (b) shows examples of clusters representing the digits 1 and 4, respectively. This illustrates how differently the same digit can appear and the separation induced by the clusters. Figure 4 (c) contains a subset of the outliers selected by the method. These outliers have different characteristics that make them sensible outliers, such as a thick stroke, incompleteness, or an unrecognizable or ambiguous shape. To investigate the influence of the cluster creation cost, we run the experiment with different values of α. In Table 1 we show results for LR with cost scaling factors α = {5, 15, 25}, APOC with α = 15, and k-means-- with k = {10, 40}. We can see that LR obtains the best V-Measure score of all methods with α = 15. The homogeneity and completeness scores reflect this as well; while homogeneity is similar to the other settings, the completeness value is much better. Looking at APOC, we see that it struggles to obtain the same quality as LR. In the case of k-means-- we can observe how providing the algorithm with the actual number of clusters results in worse performance compared to a larger number of clusters, which highlights the advantage of methods capable of automatically selecting the number of clusters from the data.

6 Conclusion

In this paper we presented a novel approach to joint clustering and outlier detection formulated as an integer program. The method only requires pairwise distances and the number of outliers as input, and detects the number of clusters directly from the data. Using a Lagrangian relaxation of the problem formulation, which is solved with a subgradient method, we obtain a method that is provably equivalent to a linear programming relaxation. Our proposed algorithm is simple to implement, highly scalable, and has a small memory footprint. The clusters and outliers found by the algorithm are meaningful and easily interpretable.

References

[1] V. Chandola, A. Banerjee, and V. Kumar. Anomaly detection: A survey.
ACM Computing Surveys, 2009.
[2] M. Pelillo. What is a Cluster? Perspectives from Game Theory. In Proc. of Advances in Neural Information Processing Systems, 2009.
[3] P. Huber and E. Ronchetti. Robust Statistics. Wiley, 2008.
[4] C. Croux and A. Ruiz-Gazen. A Fast Algorithm for Robust Principal Components Based on Projection Pursuit. In Proc. in Computational Statistics, 1996.
[5] J. Wright, A. Ganesh, S. Rao, Y. Peng, and Y. Ma. Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices by Convex Optimization. In Proc. of Advances in Neural Information Processing Systems, 2009.
[6] Emmanuel J. Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? J. ACM, 58(3):11:1–11:37, June 2011. ISSN 0004-5411.
[7] P.J. Rousseeuw and K.V. Driessen. A fast algorithm for the minimum covariance determinant estimator. Technometrics, 1999.
[8] K. Chen. A constant factor approximation algorithm for k-median clustering with outliers. In Proc. of the ACM-SIAM Symposium on Discrete Algorithms, 2008.
[9] M. Charikar, S. Khuller, D. M. Mount, and G. Narasimhan. Algorithms for Facility Location Problems with Outliers. In Proc. of the ACM-SIAM Symposium on Discrete Algorithms, 2001.
[10] S. Chawla and A. Gionis. k-means--: A Unified Approach to Clustering and Outlier Detection. In SIAM International Conference on Data Mining, 2013.
[11] B. Frey and D. Dueck. Clustering by Passing Messages Between Data Points. Science, 2007.
[12] D. Bertsimas and R. Weismantel. Optimization over Integers. Dynamic Ideas, Belmont, 2005.
[13] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004. ISBN 0521833787.
[14] D. Arthur and S. Vassilvitskii. k-means++: The Advantages of Careful Seeding. In ACM-SIAM Symposium on Discrete Algorithms, 2007.
[15] M. Breunig, H. Kriegel, R. Ng, and J. Sander. LOF: Identifying Density-Based Local Outliers. In Int. Conf. on Management of Data, 2000.
[16] A.
Rosenberg and J. Hirschberg. V-Measure: A conditional entropy-based external cluster evaluation measure. In Proc. of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 2007.
[17] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
Latent Support Measure Machines for Bag-of-Words Data Classification Yuya Yoshikawa Nara Institute of Science and Technology Nara, 630-0192, Japan yoshikawa.yuya.yl9@is.naist.jp Tomoharu Iwata NTT Communication Science Laboratories Kyoto, 619-0237, Japan iwata.tomoharu@lab.ntt.co.jp Hiroshi Sawada NTT Service Evolution Laboratories Kanagawa, 239-0847, Japan sawada.hiroshi@lab.ntt.co.jp

Abstract

In many classification problems, the input is represented as a set of features, e.g., the bag-of-words (BoW) representation of documents. Support vector machines (SVMs) are widely used tools for such classification problems. The performance of SVMs is generally determined by whether kernel values between data points can be defined properly. However, SVMs for BoW representations have a major weakness in that the co-occurrence of different but semantically similar words cannot be reflected in the kernel calculation. To overcome this weakness, we propose a kernel-based discriminative classifier for BoW data, which we call the latent support measure machine (latent SMM). With the latent SMM, a latent vector is associated with each vocabulary term, and each document is represented as a distribution of the latent vectors for the words appearing in the document. To represent the distributions efficiently, we use the kernel embeddings of distributions, which preserve high-order moment information about the distributions. The latent SMM then finds a separating hyperplane that maximizes the margins between distributions of different classes while estimating the latent vectors for words to improve classification performance. In the experiments, we show that the latent SMM achieves state-of-the-art accuracy for BoW text classification, is robust with respect to its own hyper-parameters, and is useful for visualizing words.

1 Introduction

In many classification problems, the input is represented as a set of features.
A typical example of such features is the bag-of-words (BoW) representation, which represents a document (or sentence) as a multiset of the words appearing in it while ignoring their order. Support vector machines (SVMs) [1], which are kernel-based discriminative learning methods, are widely used tools for such classification problems in various domains, e.g., natural language processing [2], information retrieval [3, 4] and data mining [5]. The performance of SVMs generally depends on whether the kernel values between documents (data points) can be defined properly. SVMs for the BoW representation have a major weakness in that the co-occurrence of different but semantically similar words cannot be reflected in the kernel calculation. For example, in news classification, 'football' and 'soccer' are semantically similar and characteristic words for football news. Nevertheless, in the BoW representation, the two words might not affect the computation of the kernel value between documents, because many kernels, e.g., linear, polynomial and Gaussian RBF kernels, evaluate kernel values based on word co-occurrences in each document, and 'football' and 'soccer' might not co-occur. To overcome this weakness, we can consider using a low-rank representation of each document, learnt by unsupervised topic models or matrix factorization. With a low-rank representation, the kernel value can be evaluated properly between documents without shared vocabulary terms. Blei et al. showed that an SVM using the topic proportions of each document extracted by latent Dirichlet allocation (LDA) outperforms an SVM using BoW features in terms of text classification accuracy [6]. Another naive approach is to use vector representations of words learnt by matrix factorization or by neural networks such as word2vec [7].
In this approach, each document is represented as a set of vectors corresponding to words appearing in the document. To classify documents represented as a set of vectors, we can use support measure machines (SMMs), which are a kernel-based discriminative learning method on distributions [8]. However, these low dimensional representations of documents or words might not be helpful for improving classification performance because the learning criteria for obtaining the representation and the classifiers are different. In this paper, we propose a kernel-based discriminative learning method for BoW representation data, which we call the latent support measure machine (latent SMM). The latent SMMs assume that a latent vector is associated with each vocabulary term, and each document is represented as a distribution of the latent vectors for words appearing in the document. By using the kernel embeddings of distributions [9], we can effectively represent the distributions without density estimation while preserving necessary distribution information. In particular, the latent SMMs map each distribution into a reproducing kernel Hilbert space (RKHS), and find a separating hyperplane that maximizes the margins between distributions from different classes on the RKHS. The learning procedure of the latent SMMs is performed by alternately maximizing the margin and estimating the latent vectors for words. The learnt latent vectors of semantically similar words are located close to each other in the latent space, and we can obtain kernel values that reflect the semantics. As a result, the latent SMMs can classify unseen data using a richer and more useful representation than the BoW representation. The latent SMMs find the latent vector representation of words useful for classification. By obtaining two- or three-dimensional latent vectors, we can visualize relationships between classes and between words for a given classification task. 
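The BoW blind spot described in Section 1 is easy to see concretely: with BoW vectors and a linear kernel, two documents that share no vocabulary terms get a kernel value of zero regardless of semantic similarity. A toy illustration (the vocabulary and documents are ours):

```python
import numpy as np

# Toy vocabulary; 'football' and 'soccer' are semantically close but distinct terms.
vocab = ['football', 'soccer', 'goal', 'market']

def bow(words):
    """Bag-of-words count vector over the toy vocabulary."""
    v = np.zeros(len(vocab))
    for w in words:
        v[vocab.index(w)] += 1
    return v

d1 = bow(['football', 'goal'])
d2 = bow(['soccer', 'goal'])
d3 = bow(['soccer'])

print(d1 @ d2)  # 1.0: only the shared word 'goal' contributes
print(d1 @ d3)  # 0.0: no shared terms, so the kernel ignores the football/soccer similarity
```

In the latent SMM, by contrast, 'football' and 'soccer' would receive nearby latent vectors, so the embedding kernel between them is large even though they never co-occur.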
In our experiments, we demonstrate the quantitative and qualitative effectiveness of the latent SMM on standard BoW text datasets. The experimental results first indicate that the latent SMM can achieve state-of-the-art classification accuracy. We then show that the performance of the latent SMM is robust with respect to its own hyper-parameters, and that the latent vectors for words in the latent SMM can be represented in a two-dimensional space while maintaining high classification performance. Finally, by visualizing the latent vectors, we show that the characteristic words of each class are concentrated in a single region. The latent SMM is a general framework of discriminative learning for BoW data. Thus, the idea behind the latent SMM can be applied to various machine learning problems for BoW data that have been solved using SVMs, for example novelty detection [10], structure prediction [11], and learning to rank [12].

2 Related Work

The proposed method is based on the framework of support measure machines (SMMs), which perform kernel-based discriminative learning on distributions [8]. Muandet et al. showed that SMMs are more effective than SVMs when the observed feature vectors are numerical and dense, in their experiments on handwritten digit recognition and natural scene categorization. On the other hand, when the observations are BoW features, the SMMs coincide with the SVMs, as described in Section 3.2. To retain the benefits of SMMs for BoW data, the proposed method represents each word as a numerical, dense vector that is estimated from the given data. The proposed method aims to achieve higher classification performance by learning a classifier and a feature representation simultaneously. Supervised topic models [13] and maximum margin topic models (MedLDA) [14] have been proposed based on a similar motivation but using different approaches. They outperform classifiers using features extracted by unsupervised LDA.
There are two main differences between these methods and the proposed method. First, the proposed method plugs the latent word vectors into a discriminant function, while the existing methods plug document-specific vectors into their discriminant functions. Second, the proposed method can naturally develop non-linear classifiers based on the kernel embeddings of distributions. We demonstrate the effectiveness of the proposed model by comparing it with the topic-model-based classifiers in our text classification experiments.

3 Preliminaries

In this section, we introduce the kernel embeddings of distributions and support measure machines. Our method in Section 4 will build upon these techniques.

3.1 Representations of Distributions via Kernel Embeddings

Suppose that we are given a set of n distributions {P_i}_{i=1}^n, where P_i is the ith distribution on space X ⊂ R^q. The kernel embedding of distributions embeds any distribution P_i into a reproducing kernel Hilbert space (RKHS) H_k specified by kernel k [15], and the distribution is represented as an element µ_{P_i} of the RKHS. More precisely, the element for the ith distribution is defined as

  µ_{P_i} := E_{x∼P_i}[k(·, x)] = ∫_X k(·, x) dP_i ∈ H_k, (1)

where the kernel k is referred to as an embedding kernel. It is known that the element µ_{P_i} preserves the properties of the probability distribution P_i, such as its mean, covariance and higher-order moments, when characteristic kernels (e.g., the Gaussian RBF kernel) are used [15]. In practice, although distribution P_i is unknown, we are given a set of samples X_i = {x_im}_{m=1}^{M_i} drawn from the distribution. In this case, by interpreting the sample set X_i as the empirical distribution P̂_i = (1/M_i) Σ_{m=1}^{M_i} δ_{x_im}(·), where δ_x(·) is the Dirac delta function at point x ∈ X, the empirical kernel embedding µ̂_{P_i} is given by

  µ̂_{P_i} = (1/M_i) Σ_{m=1}^{M_i} k(·, x_im) ∈ H_k, (2)

which approximates µ_{P_i} with an error rate of ||µ̂_{P_i} − µ_{P_i}||_{H_k} = O_p(M_i^{−1/2}) [9].
3.2 Support Measure Machines

Now we consider learning a separating hyperplane on distributions by employing support measure machines (SMMs). An SMM amounts to solving an SVM problem with a kernel between empirical embedded distributions {µ̂_{P_i}}_{i=1}^n, called a level-2 kernel. A level-2 kernel between the ith and jth distributions is given by

  K(P̂_i, P̂_j) = ⟨µ̂_{P_i}, µ̂_{P_j}⟩_{H_k} = (1/(M_i M_j)) Σ_{g=1}^{M_i} Σ_{h=1}^{M_j} k(x_ig, x_jh), (3)

where the kernel k is the embedding kernel used in Eq. (2). Although the level-2 kernel in Eq. (3) is linear on the embedded distributions, we can also consider non-linear level-2 kernels. For example, a Gaussian RBF level-2 kernel with bandwidth parameter λ > 0 is given by

  K_rbf(P̂_i, P̂_j) = exp(−(λ/2) ||µ̂_{P_i} − µ̂_{P_j}||²_{H_k}) = exp(−(λ/2) (⟨µ̂_{P_i}, µ̂_{P_i}⟩_{H_k} − 2⟨µ̂_{P_i}, µ̂_{P_j}⟩_{H_k} + ⟨µ̂_{P_j}, µ̂_{P_j}⟩_{H_k})). (4)

Note that the inner products ⟨·, ·⟩_{H_k} in Eq. (4) can be calculated via Eq. (3). By using these kernels, we can measure similarities between distributions based on their moment information. The SMMs are a generalization of the standard SVMs. For example, suppose that a word is represented as a one-hot vector of vocabulary length, where all the elements are zero except for the entry corresponding to the vocabulary term. Then, a document is represented by adding the one-hot vectors of the words appearing in the document. This is equivalent to using a linear kernel as the embedding kernel in the SMMs. Then, by using a non-linear kernel as the level-2 kernel, as in Eq. (4), the SMM for BoW documents is the same as an SVM with a non-linear kernel.

4 Latent Support Measure Machines

In this section, we propose latent support measure machines (latent SMMs), which are effective for BoW data classification because they learn a latent word representation that improves classification performance. The SMM assumes that a set of samples X_i from distribution P_i is observed. On the other hand, as described below, the latent SMM assumes that X_i is unobserved.
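Equations (3) and (4) can be computed directly from the two sample sets; a minimal sketch with a Gaussian RBF embedding kernel (the function names are ours):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian RBF embedding kernel between two sample points."""
    return np.exp(-gamma / 2.0 * np.sum((a - b) ** 2))

def level2_linear(Xi, Xj, gamma=1.0):
    """Linear level-2 kernel <mu_i, mu_j> between two empirical distributions (Eq. 3)."""
    return np.mean([[rbf(x, y, gamma) for y in Xj] for x in Xi])

def level2_rbf(Xi, Xj, gamma=1.0, lam=1.0):
    """Gaussian RBF level-2 kernel on the embedded distributions (Eq. 4)."""
    sq = (level2_linear(Xi, Xi, gamma) - 2 * level2_linear(Xi, Xj, gamma)
          + level2_linear(Xj, Xj, gamma))  # squared RKHS distance between embeddings
    return np.exp(-lam / 2.0 * sq)
```

By construction level2_rbf(Xi, Xi) = 1, since the squared RKHS distance of an embedding to itself is zero.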
Instead, we consider the case where BoW features are given for each document. More formally, we are given a training set of n pairs of documents and class labels {(d_i, y_i)}_{i=1}^n, where d_i is the ith document, represented as a multiset of words appearing in the document, and y_i ∈ Y is a class variable. Each word is included in the vocabulary set V. For simplicity, we consider a binary class variable y_i ∈ {+1, −1}; the proposed method is also applicable to multi-class classification problems by adopting one-versus-one or one-versus-rest strategies, as with standard SVMs [16]. With the latent SMM, each word t ∈ V is represented by a q-dimensional latent vector x_t ∈ R^q, and the ith document is represented as the set of latent vectors for the words appearing in it, X_i = {x_t}_{t∈d_i}. Then, using the kernel embeddings of distributions described in Section 3.1, we can obtain a representation of the ith document from X_i as follows:

  µ̂_{P_i} = (1/|d_i|) Σ_{t∈d_i} k(·, x_t).

Using the latent word vectors X = {x_t}_{t∈V} and the document representations {µ̂_{P_i}}_{i=1}^n, the primal optimization problem for the latent SMM can be formulated in an analogous but different way from the original SMMs:

  min_{w,b,ξ,X,θ} (1/2)||w||² + C Σ_{i=1}^n ξ_i + (ρ/2) Σ_{t∈V} ||x_t||²_2
  subject to y_i(⟨w, µ_{P_i}⟩_H − b) ≥ 1 − ξ_i, ξ_i ≥ 0, (5)

where {ξ_i}_{i=1}^n denotes slack variables for handling soft margins. Unlike the primal form of the SMMs, that of the latent SMMs includes an ℓ2 regularization term with parameter ρ > 0 over the latent word vectors X. The latent SMM minimizes Eq. (5) with respect to the latent word vectors X and the kernel parameters θ, along with the weight vector w, the bias b and the slack variables {ξ_i}_{i=1}^n. It is extremely difficult to solve the primal problem in Eq. (5) directly because the inner product ⟨w, µ_{P_i}⟩_H in the constraints is computed in an infinite-dimensional space. Thus, we solve the problem by converting it into another optimization problem in which this inner product does not appear explicitly.
Unfortunately, due to its non-convex nature, we cannot derive the dual form of Eq. (5) as with the standard SVMs. Thus we consider a min-max optimization problem, derived by first introducing Lagrange multipliers A = {a_1, a_2, …, a_n} and then plugging w = Σ_{i=1}^n a_i µ̂_{P_i} into Eq. (5):

  min_{X,θ} max_A L(A, X, θ) subject to 0 ≤ a_i ≤ C, Σ_{i=1}^n a_i y_i = 0, (6a)

where

  L(A, X, θ) = Σ_{i=1}^n a_i − (1/2) Σ_{i=1}^n Σ_{j=1}^n a_i a_j y_i y_j K(P̂_i, P̂_j; X, θ) + (ρ/2) Σ_{t∈V} ||x_t||²_2, (6b)

and K(P̂_i, P̂_j; X, θ) is the kernel value between the empirical distributions P̂_i and P̂_j specified by the parameters X and θ, as in Eq. (3). We solve this min-max problem by separating it into two partial optimization problems: 1) maximization over A given the current estimates X̄ and θ̄, and 2) minimization over X and θ given the current estimate Ā. This approach is analogous to wrapper methods in multiple kernel learning [17].

Maximization over A. When we fix X and θ in Eq. (6) at the current estimates X̄ and θ̄, the maximization over A becomes a quadratic program:

  max_A Σ_{i=1}^n a_i − (1/2) Σ_{i=1}^n Σ_{j=1}^n a_i a_j y_i y_j K(P̂_i, P̂_j; X̄, θ̄) subject to 0 ≤ a_i ≤ C, Σ_{i=1}^n a_i y_i = 0, (7)

which is identical to the dual problem of the standard SVMs. Thus, we can obtain the optimal A by employing an existing SVM package.

Table 1: Dataset specifications.

                 # samples   # features   # classes
  WebKB              4,199        7,770           4
  Reuters-21578      7,674       17,387           8
  20 Newsgroups     18,821       70,216          20

Minimization over X and θ. When we fix A in Eq. (6) at the current estimate Ā, the min-max problem reduces to the minimization problem

  min_{X,θ} l(X, θ), where l(X, θ) = −(1/2) Σ_{i=1}^n Σ_{j=1}^n ā_i ā_j y_i y_j K(P̂_i, P̂_j; X, θ) + (ρ/2) Σ_{t∈V} ||x_t||²_2. (8)

To solve this problem, we use a quasi-Newton method [18], which requires the gradients with respect to the parameters.
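The alternating scheme — solve the SVM dual for A with the kernel matrix fixed, then descend on l(X, θ) with the multipliers fixed — can be sketched as below. This is our own simplified illustration, not the authors' code: it keeps γ fixed, uses a linear level-2 kernel on RBF embeddings, and lets L-BFGS approximate the gradient numerically rather than implementing the analytic gradient; scikit-learn and SciPy are assumed dependencies.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.svm import SVC

def fit_latent_smm(docs, y, V, q=2, C=1.0, rho=0.1, gamma=1.0, iters=3):
    """Sketch of the alternating optimization for the latent SMM.

    docs : list of word-id lists, y : labels in {-1, +1},
    V : vocabulary size, q : latent dimensionality.
    """
    rng = np.random.default_rng(0)
    X = rng.normal(scale=0.5, size=(V, q))  # latent word vectors

    def gram(Xf):
        # Level-2 linear kernel on Gaussian RBF embeddings (Eq. 3).
        Xw = Xf.reshape(V, q)
        n = len(docs)
        G = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                D = Xw[docs[i]][:, None, :] - Xw[docs[j]][None, :, :]
                G[i, j] = np.mean(np.exp(-gamma / 2 * (D ** 2).sum(axis=2)))
        return G

    svc = None
    for _ in range(iters):
        # Step 1: fix X, solve the SVM dual (Eq. 7) on the current kernel matrix.
        svc = SVC(C=C, kernel='precomputed').fit(gram(X.ravel()), y)
        a = np.zeros(len(y))
        a[svc.support_] = np.abs(svc.dual_coef_[0])

        # Step 2: fix the multipliers, descend on l(X) (Eq. 8) with L-BFGS;
        # the gradient is approximated numerically instead of using Eq. (9).
        ay = a * y

        def loss(Xf):
            return -0.5 * ay @ gram(Xf) @ ay + 0.5 * rho * np.sum(Xf ** 2)

        X = minimize(loss, X.ravel(), method='L-BFGS-B',
                     options={'maxiter': 25}).x.reshape(V, q)
    return X, svc
```

In the paper, the inner minimization also updates the kernel parameters θ and uses the analytic gradient, which is far cheaper than numerical differentiation when the vocabulary is large.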
For each word m ∈ V, the gradient with respect to the latent word vector x_m is given by

  ∂l(X, θ)/∂x_m = −(1/2) Σ_{i=1}^n Σ_{j=1}^n ā_i ā_j y_i y_j ∂K(P̂_i, P̂_j; X, θ)/∂x_m + ρ x_m, (9)

where the gradient of the kernel with respect to x_m depends on the choice of kernels. For example, when choosing the embedding kernel to be a Gaussian RBF kernel with bandwidth parameter γ > 0, k_γ(x_s, x_t) = exp(−(γ/2)||x_s − x_t||²), and the level-2 kernel to be linear, the gradient is given by

  ∂K(P̂_i, P̂_j; X, θ)/∂x_m = (1/(|d_i||d_j|)) Σ_{s∈d_i} Σ_{t∈d_j} k_γ(x_s, x_t) × { γ(x_t − x_s) if m = s and m ≠ t; γ(x_s − x_t) if m = t and m ≠ s; 0 if m = t and m = s }.

As with the estimation of X, the kernel parameters θ can be obtained by computing the gradient ∂l(X, θ)/∂θ. By alternately repeating these computations until the dual objective in Eq. (6) converges, we can find a local optimum of the min-max problem. The parameters that need to be stored after learning are the latent word vectors X, the kernel parameters θ and the Lagrange multipliers A. Classification of a new document d* is performed by computing y(d*) = Σ_{i=1}^n a_i y_i K(P̂_i, P̂_*; X, θ), where P̂_* is the distribution of latent vectors for the words included in d*.

5 Experiments with Bag-of-Words Text Classification

Data description. For the evaluation, we used the following three standard multi-class text classification datasets: WebKB, Reuters-21578 and 20 Newsgroups. These datasets, which have already been preprocessed by removing short and stop words, are found in [19] and can be downloaded from the author's website1. The specifications of these datasets are shown in Table 1. For our experimental setting, we ignored the original training/test data separations.

Setting. In our experiments, the proposed method, the latent SMM, uses a Gaussian RBF embedding kernel and a linear level-2 kernel. To demonstrate the effectiveness of the latent SMM, we compare it with several methods: MedLDA, SVD+SMM, word2vec+SMM and SVMs.
MedLDA is a method that jointly learns LDA and a maximum-margin classifier, and is a state-of-the-art discriminative learning method for BoW data [14]. We use the authors' implementation of MedLDA2. SVD+SMM is a two-step procedure: 1) extract low-dimensional representations of words using a singular value decomposition (SVD), and 2) learn a support measure machine using the distribution of the extracted representations of the words appearing in each document, with the same kernels as the latent SMM. word2vec+SMM employs the representations of words learnt by word2vec [7] and uses them for the SMM as in SVD+SMM. Here we use pre-trained 300-dimensional word representation vectors from the Google News corpus, which can be downloaded from the authors' website3. Note that word2vec+SMM utilizes an additional resource to represent the latent vectors for words, unlike the latent SMM, and the learning of word2vec requires n-gram information about documents, which is lost in the BoW representation. With SVMs, we use a Gaussian RBF kernel with parameter γ and a quadratic polynomial kernel, and the features are represented as BoW. We use LIBSVM4 to estimate the Lagrange multipliers A in the latent SMM and to build the SVMs and SMMs. To deal with multi-class classification, we adopt a one-versus-one strategy [16] in the latent SMM, SVMs and SMMs.

1 http://web.ist.utl.pt/acardoso/datasets/
2 http://www.ml-thu.net/~jun/medlda.shtml
3 https://code.google.com/p/word2vec/

Figure 1: Classification accuracy over the number of training samples ((a) WebKB, (b) Reuters-21578, (c) 20 Newsgroups).

Figure 2: Classification accuracy over the latent dimensionality ((a) WebKB, (b) Reuters-21578, (c) 20 Newsgroups).
In our experiments, we choose the optimal parameters for these methods from the following ranges: γ ∈ {10^−3, 10^−2, …, 10^3} for the latent SMM, SVD+SMM, word2vec+SMM and the SVM with a Gaussian RBF kernel; C ∈ {2^−3, 2^−1, …, 2^5, 2^7} for all the methods; regularizer parameter ρ ∈ {10^−2, 10^−1, 10^0} and latent dimensionality q ∈ {2, 3, 4} for the latent SMM; and the latent dimensionality of MedLDA and SVD+SMM ranges over {10, 20, …, 50}.

Accuracy over number of training samples. We first show the classification accuracy when varying the number of training samples. Here we randomly chose five sets of training samples, and used the samples remaining outside each training set as the test set. We removed words that occurred in less than 1% of the training documents; below, we refer to this percentage as the word occurrence threshold. As shown in Figure 1, the latent SMM outperformed the other methods for every number of training samples on the WebKB and Reuters-21578 datasets. For the 20 Newsgroups dataset, the accuracies of the latent SMM, MedLDA and word2vec+SMM were comparable, and better than those of SVD+SMM and the SVMs. The performance of SVD+SMM changed depending on the dataset: while SVD+SMM was the second-best method on Reuters-21578, it placed fourth on the other datasets. This result indicates that the usefulness for classification of the low-rank representations produced by SVD depends on the properties of the dataset. The high classification performance of the latent SMM on all of the datasets demonstrates the effectiveness of learning the latent word representations.

Robustness over latent dimensionality. Next we confirm the robustness of the latent SMM over the latent dimensionality. For this experiment, we varied the latent dimensionality of the latent SMM, MedLDA and SVD+SMM within {2, 4, …, 12}. Figure 2 shows the accuracy when varying the latent dimensionality.
Here the number of training samples in each dataset was 600, and the word occurrence threshold was 1%. For all latent dimensionalities, the accuracy of the latent SMM was consistently better than that of the other methods. Moreover, even with two-dimensional latent vectors, the latent SMM achieved high classification performance. On the other hand, MedLDA and SVD+SMM often could not perform well when the latent dimensionality was low. One of the reasons why the latent SMM achieves good performance even with a very low latent dimensionality q is that it can use q|d_i| parameters to classify the ith document, while MedLDA uses only q parameters. Since the latent word representation used in SVD+SMM is not optimized for the given classification problem, it does not contain features useful for classification, especially when the latent dimensionality is low.

4 http://www.csie.ntu.edu.tw/~cjlin/libsvm/

Figure 3: Classification accuracy on WebKB when varying the word occurrence threshold.

Figure 4: Parameter sensitivity on Reuters-21578.

Figure 5 (panels: project, course, faculty, student): Distributions of latent vectors for words appearing in documents of each class on WebKB.

Accuracy over word occurrence threshold. In the above experiments, we omitted words whose occurrence accounts for less than 1% of the training documents. By reducing the threshold, low-frequency words become included in the training documents. This might be a difficult situation for the latent SMM and SVD+SMM because they cannot observe enough training data to estimate their latent word vectors. On the other hand, it would be an advantageous situation for SVMs using BoW features because they can use low-frequency words that are useful for classification to compute their kernel values. Figure 3 shows the classification accuracy on WebKB when varying the word occurrence threshold within {0.4, 0.6, 0.8, 1.0}.
The performance of the latent SMM did not change when the thresholds were varied, and was better than that of the other methods in spite of this difficult situation. Parameter sensitivity. Figure 4 shows how the performance of the latent SMM changes with the ℓ2 regularizer parameter ρ and with C on the Reuters-21578 dataset with 1,000 training samples. Here the latent dimensionality of the latent SMM was fixed at q = 2 to eliminate the effect of q. The performance is insensitive to ρ except when C is too small. Moreover, we can see that the performance improves as the value of C increases. In general, the performance of SVM-based methods is very sensitive to C and the kernel parameters [20]. Since the kernel parameters θ in the latent SMM are estimated along with the latent vectors X, the latent SMM avoids this sensitivity to the kernel parameters. In addition, Figure 2 has shown that the latent SMM is robust over the latent dimensionality. Thus, the latent SMM can achieve high classification accuracy by focusing only on tuning the best C, and experimentally the best C is large, e.g., C ≥ 2^5. Visualization of classes. In the above experiments, we have shown that the latent SMM can achieve high classification accuracy with low-dimensional latent vectors. By using two- or three-dimensional latent vectors in the latent SMM and visualizing them, we can understand the relationships between classes. Figure 5 shows the distributions of latent vectors for words appearing in documents of each class.
Figure 6: Visualization of latent vectors for words on WebKB (complete view with 50% sampling, plus closeup panels (a)–(d)). The font color of each word indicates the class in which the word occurs most frequently, and the ‘project’, ‘course’, ‘student’ and ‘faculty’ classes correspond to yellow, red, blue and green fonts, respectively.
Each class has its own characteristic distribution that is different from those of the other classes.
This result shows that the latent SMM can extract the differences between the distributions of the classes. For example, the distribution of ‘course’ is separated from those of the other classes, which indicates that documents categorized in ‘course’ share few words with documents categorized in the other classes. On the other hand, the latent words used in the ‘project’ class are widely distributed, and its distribution overlaps those of the ‘faculty’ and ‘student’ classes. This is likely because faculty and students work jointly on projects, so words from both ‘faculty’ and ‘student’ appear simultaneously in ‘project’ documents. Visualization of words. In addition to the visualization of classes, the latent SMM can visualize words using two- or three-dimensional latent vectors. Unlike unsupervised visualization methods for documents, e.g., [21], the latent SMM can gather the characteristic words of each class in one region. Figure 6 shows the visualization result for words on the WebKB dataset. Here we used the same learning result as that used in Figure 5. As shown in the complete view, highly frequent words in each class tend to gather in different regions. On the right side of the figure, four regions from the complete view are displayed in closeup. Figures (a), (b) and (c) include words indicating the ‘course’, ‘faculty’ and ‘student’ classes, respectively. For example, figure (a) includes ‘exercise’, ‘examine’ and ‘quiz’, which indicate examinations in lectures. Figure (d) includes words of various classes, although the ‘project’ class dominates the region, as shown in Figure 5. This means that words appearing in the ‘project’ class are related to the other classes or are general words, e.g., ‘occur’ and ‘differ’.
6 Conclusion
We have proposed a latent support measure machine (latent SMM), which is a kernel-based discriminative learning method effective for sets of features such as bag-of-words (BoW).
The latent SMM represents each word as a latent vector, and each document to be classified as a distribution of the latent vectors for the words appearing in the document. Then the latent SMM finds a separating hyperplane that maximizes the margins between distributions of different classes while estimating the latent vectors for words to improve the classification performance. The experimental results can be summarized as follows. First, the latent SMM has achieved state-of-the-art classification accuracy for BoW data. Second, we have shown experimentally that the performance of the latent SMM is robust with respect to its own hyper-parameters. Third, since the latent SMM can represent each word as a two- or three-dimensional latent vector, we have shown that latent SMMs are useful for understanding the relationships between classes and between words by visualizing the latent vectors.
Acknowledgment. This work was supported by JSPS Grant-in-Aid for JSPS Fellows (259867).
References
[1] Corinna Cortes and Vladimir Vapnik. Support-Vector Networks. Machine Learning, 20(3):273–297, September 1995.
[2] Taku Kudo and Yuji Matsumoto. Chunking with Support Vector Machines. Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies, 2001.
[3] Dell Zhang and Wee Sun Lee. Question Classification Using Support Vector Machines. SIGIR, page 26, 2003.
[4] Changhua Yang, Kevin Hsin-Yih Lin, and Hsin-Hsi Chen. Emotion Classification Using Web Blog Corpora. IEEE/WIC/ACM International Conference on Web Intelligence, pages 275–278, November 2007.
[5] Pranam Kolari, Tim Finin, and Anupam Joshi. SVMs for the Blogosphere: Blog Identification and Splog Detection. AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs, 2006.
[6] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet Allocation. The Journal of Machine Learning Research, 3:993–1022, 2003.
[7] Tomas Mikolov, Ilya Sutskever, and Kai Chen. Distributed Representations of Words and Phrases and their Compositionality. NIPS, pages 1–9, 2013.
[8] Krikamol Muandet and Kenji Fukumizu. Learning from Distributions via Support Measure Machines. NIPS, 2012.
[9] Alex Smola, Arthur Gretton, Le Song, and Bernhard Schölkopf. A Hilbert Space Embedding for Distributions. Algorithmic Learning Theory, 2007.
[10] Bernhard Schölkopf, Robert Williamson, Alex Smola, John Shawe-Taylor, and John Platt. Support Vector Method for Novelty Detection. NIPS, pages 582–588, 1999.
[11] Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. Support Vector Machine Learning for Interdependent and Structured Output Spaces. ICML, page 104, 2004.
[12] Thorsten Joachims. Optimizing Search Engines Using Clickthrough Data. SIGKDD, page 133, 2002.
[13] David M. Blei and Jon D. McAuliffe. Supervised Topic Models. NIPS, pages 1–8, 2007.
[14] Jun Zhu, Amr Ahmed, and Eric P. Xing. MedLDA: Maximum Margin Supervised Topic Models for Regression and Classification. ICML, 2009.
[15] Bharath K. Sriperumbudur and Arthur Gretton. Hilbert Space Embeddings and Metrics on Probability Measures. The Journal of Machine Learning Research, 11:1517–1561, 2010.
[16] Chih-Wei Hsu and Chih-Jen Lin. A Comparison of Methods for Multi-class Support Vector Machines. IEEE Transactions on Neural Networks, 13(2):415–425, 2002.
[17] Sören Sonnenburg and Gunnar Rätsch. Large Scale Multiple Kernel Learning. The Journal of Machine Learning Research, 7:1531–1565, 2006.
[18] Dong C. Liu and Jorge Nocedal. On the Limited Memory BFGS Method for Large Scale Optimization. Mathematical Programming, 45(1-3):503–528, August 1989.
[19] Ana Cardoso-Cachopo. Improving Methods for Single-label Text Categorization. PhD thesis, 2007.
[20] Vladimir Cherkassky and Yunqian Ma. Practical Selection of SVM Parameters and Noise Estimation for SVM Regression.
Neural Networks, 17(1):113–126, January 2004.
[21] Tomoharu Iwata, Takeshi Yamada, and Naonori Ueda. Probabilistic Latent Semantic Visualization: Topic Model for Visualizing Documents. SIGKDD, 2008.
Sparse Polynomial Learning and Graph Sketching
Murat Kocaoglu, Karthikeyan Shanmugam, Alexandros G. Dimakis (Department of Electrical and Computer Engineering) and Adam Klivans (Department of Computer Science), The University of Texas at Austin, USA
mkocaoglu@utexas.edu, karthiksh@utexas.edu, dimakis@austin.utexas.edu, klivans@cs.utexas.edu
Abstract
Let f : {−1, 1}^n → R be a polynomial with at most s non-zero real coefficients. We give an algorithm for exactly reconstructing f given random examples from the uniform distribution on {−1, 1}^n that runs in time polynomial in n and 2^s and succeeds if the function satisfies the unique sign property: there is one output value which corresponds to a unique set of values of the participating parities. This sufficient condition is satisfied when every coefficient of f is perturbed by a small random noise, or satisfied with high probability when the s parity functions are chosen randomly or when all the coefficients are positive. Learning sparse polynomials over the Boolean domain in time polynomial in n and 2^s is considered notoriously hard in the worst case. Our result shows that the problem is tractable for almost all sparse polynomials. We then show an application of this result to hypergraph sketching, the problem of learning a hypergraph that is sparse both in the number of hyperedges and in the size of the hyperedges, from uniformly drawn random cuts. We also provide experimental results on a real-world dataset.
1 Introduction
Learning sparse polynomials over the Boolean domain is one of the fundamental problems in computational learning theory and has been studied extensively over the last twenty-five years [1–6]. In almost all cases, known algorithms for learning or interpolating sparse polynomials require query access to the unknown polynomial.
An outstanding open problem is to find an algorithm for learning s-sparse polynomials with respect to the uniform distribution on {−1, 1}^n that runs in time polynomial in n and g(s) (where g is any fixed function independent of n) and requires only randomly chosen examples to succeed. In particular, such an algorithm would imply a breakthrough result for the problem of learning k-juntas (functions that depend on only k ≪ n input variables; it is not known how to learn ω(1)-juntas in polynomial time). We present an algorithm and a set of natural conditions such that any sparse polynomial f satisfying these conditions can be learned from random examples in time polynomial in n and 2^s. In particular, any f whose coefficients have been subjected to a small perturbation (smoothed analysis setting) satisfies these conditions (for example, if a Gaussian with arbitrarily small variance is added independently to each coefficient, f satisfies these conditions with probability 1). We state our main result here:
Theorem 1. Let f be an s-sparse function that satisfies at least one of the following properties: a) (smoothed analysis setting) The coefficients {c_i}_{i=1}^s are in general position, or all of them are perturbed by a small random noise. b) The s parity functions are linearly independent. c) All the coefficients are positive. Then we learn f with high probability in time poly(n, 2^s).
We note that smoothed analysis, pioneered in [7], has now become a common alternative for problems that seem intractable in the worst case. Our algorithm also succeeds in the presence of noise:
Theorem 2. Let f = f_1 + f_2 be a polynomial such that f_1 and f_2 depend on mutually disjoint sets of parity functions, f_1 is s-sparse, and the values of f_1 are ‘well separated’. Further, ∥f_2∥_1 ≤ ν (i.e., f is approximately sparse).
If observations are corrupted by additive noise bounded by ϵ, then there exists an algorithm, taking ϵ + ν as an input, that outputs g in time polynomial in n and 2^s such that ∥f − g∥_2 ≤ O(ν + ϵ) with high probability. The treatment of the noisy case, i.e., the formal statement of this theorem, the corresponding algorithm, and the related proofs are relegated to the supplementary material. All these results are based on what we call the unique sign property: if there is one value that f takes which uniquely specifies the signs of the parity functions involved, then the function is efficiently learnable. Note that our results cannot be used for learning juntas or other Boolean-valued sparse polynomials, since the unique sign property does not hold in these settings. We show that this property holds for the complement of the cut function on a hypergraph (number of hyperedges − cut value). This fact can be used to learn the cut complement function and eventually infer the structure of a sparse hypergraph from random cuts. Sparsity implies that the number of hyperedges and the size of each hyperedge are of constant size. Hypergraphs can be used to represent relations in many real-world data sets. For example, one can represent the relation between the books and the readers (users) in the Amazon dataset with a hypergraph. Book titles and Amazon users can be mapped to nodes and hyperedges, respectively [8]. Then a node belongs to a hyperedge if the corresponding book is read by the user represented by that hyperedge. When such graphs evolve over time (and space), the difference graph filtered by time and space is often sparse. Locating and learning the few hyperedges of such difference graphs from random cuts constitutes hypergraph sketching. We test our algorithms on hypergraphs generated from a dataset that contains the time-stamped record of messages between Yahoo! Messenger users marked with the user locations (zip codes).
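The unique sign property can be seen concretely on a toy example. The sketch below uses a hypothetical 2-sparse polynomial f(x) = 2.3·x1·x2 + 1.1·x3 (coefficients chosen to be in general position, so the maximum 2.3 + 1.1 is attained only when both parities equal +1); the sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4000

# Hypothetical 2-sparse polynomial f(x) = 2.3*x1*x2 + 1.1*x3.
X = rng.choice([-1, 1], size=(m, n))
parities = np.stack([X[:, 0] * X[:, 1], X[:, 2]], axis=1)
f = parities @ np.array([2.3, 1.1])

# Every sample attaining the maximum of f exhibits the same sign
# pattern of the participating parities: here (+1, +1).
top = parities[f == f.max()]
assert (top == 1).all()
print(f.max())   # approximately 3.4 = 2.3 + 1.1
```

Knowing this one sign pattern for the maximizing samples is what lets the identification procedure narrow the candidate parities down from 2^n to O(2^s).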
1.1 Approach and Related Work
The problem of recovering the sparsest solution of a set of underdetermined linear equations has received significant recent attention in the context of compressed sensing [9–11]. In compressed sensing, one tries to recover an unknown sparse vector using few linear observations (measurements), possibly in the presence of noise. The recent papers [12, 13] are of particular relevance to us since they establish a connection between learning sparse polynomials and compressed sensing. The authors show that the problem of learning a sparse polynomial is equivalent to recovering the unknown sparse coefficient vector using linear measurements. By applying techniques from compressed sensing theory, namely the Restricted Isometry Property (see [12]) and incoherence (see [13]), the authors independently established results for reconstructing sparse polynomials using convex optimization. These results have near-optimal sample complexity. However, the running time of these algorithms is exponential in the underlying dimension n, because the measurement matrix of the equivalent compressed sensing problem requires one column for every possible non-zero monomial. In this paper, we show how to solve this problem in time polynomial in n and 2^s under the assumption that the sparse polynomial has the unique sign property. Our key contribution is a novel identification procedure that reduces the list of potentially non-zero coefficients from the naive bound of 2^n to 2^s when the function has this property. On the theoretical side, there has been interesting recent work [14] that approximately learns sparse polynomial functions when the underlying domain is Gaussian; those results do not seem to translate to the Boolean domain. We also note the work of [15], which gives an algorithm for learning sparse Boolean functions with respect to a randomly chosen product distribution on {−1, 1}^n. Their work does not apply to the uniform distribution on {−1, 1}^n.
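The dictionary blowup mentioned above is easy to quantify: a naive compressed sensing formulation needs one column per monomial of degree at most d. A small sketch (the parameter values are illustrative, not from the paper):

```python
from math import comb

def num_columns(n, d):
    """Number of columns in a naive compressed sensing dictionary:
    one per monomial of degree at most d on n Boolean variables."""
    return sum(comb(n, k) for k in range(d + 1))

# Full dictionary vs. the O(2^s) candidate list after identification.
n, d, s = 1000, 4, 5
print(num_columns(n, d))   # over 4 * 10**10 columns: infeasible
print(2 ** (s + 1))        # 64 candidate parities
```

This is the gap the identification procedure closes: the convex program then only ever sees the short candidate list.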
On the practical side, we give an application of the theory to the problem of hypergraph sketching. We generalize prior work [12] that applied the compressed sensing approach discussed above to graph sketching on evolving social network graphs. In our algorithm, while the sample complexity requirements are higher, the time complexity is greatly reduced in comparison. We test our algorithms on a real dataset and show that the algorithm scales well on sparse hypergraphs created out of the Yahoo! Messenger dataset by filtering through time and location stamps.
2 Definitions
Consider a real-valued function over the Boolean hypercube f : {−1, 1}^n → R. Given a sequence of labeled samples of the form ⟨f(x), x⟩, where x is sampled from the uniform distribution U over the hypercube {−1, 1}^n, we are interested in an efficient algorithm that learns the function f with high probability. Through Fourier expansion, f can be written as a linear combination of monomials:

f(x) = Σ_{S ⊆ [n]} c_S χ_S(x),  ∀x ∈ {−1, 1}^n,   (1)

where [n] is the set of integers from 1 to n, χ_S(x) = Π_{i ∈ S} x_i, and c_S ∈ R. Let c be the vector of coefficients c_S. A monomial χ_S(x) is also called a parity function. More background on Boolean functions and the Fourier expansion can be found in [16]. In this work, we restrict ourselves to sparse polynomials f with sparsity s in the Fourier domain, i.e., f is a linear combination of unknown parity functions χ_{S_1}(x), χ_{S_2}(x), …, χ_{S_s}(x) with s unknown real coefficients {c_{S_i}}_{i=1}^s such that c_{S_i} ≠ 0 for all 1 ≤ i ≤ s; all other coefficients are 0. Let the subsets corresponding to the s parity functions form a family of sets I = {S_i}_{i=1}^s. Finding I is equivalent to finding the s parity functions. Note: in certain places, where the context makes it clear, we slightly abuse notation so that the set S_i identifying a specific parity function is replaced by just the index i; the coefficients may then be denoted simply by c_i and the parity functions by χ_i(·).
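Since the parities χ_S are orthonormal under the uniform distribution, each Fourier coefficient is c_S = E[f(x) χ_S(x)] and can be estimated by a sample mean. A small sketch with a toy 2-sparse polynomial (the coefficients 1.5 and −0.7 and the sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 200000

def chi(X, S):
    """Parity chi_S(x) = prod_{i in S} x_i, evaluated row-wise."""
    return np.prod(X[:, list(S)], axis=1)

X = rng.choice([-1, 1], size=(m, n))
f = 1.5 * chi(X, {0, 3}) - 0.7 * chi(X, {2})   # toy sparse polynomial

# c_S = E[f(x) chi_S(x)] by orthonormality; estimated by a sample mean.
for S in [{0, 3}, {2}, {1, 4}]:
    print(S, np.mean(f * chi(X, S)))   # close to 1.5, -0.7, 0
```

The estimates concentrate at rate O(1/sqrt(m)), which is why, once the small candidate family is known, the coefficients are easy to read off.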
Let F_2 denote the binary field. Every parity function χ_i(·) can be represented by a vector p_i ∈ F_2^{n×1}. The j-th entry p_i(j) of the vector p_i is 1 if j ∈ S_i and 0 otherwise.
Definition 1. A set of s parity functions {χ_i(·)}_{i=1}^s is said to be linearly independent if the corresponding set of vectors {p_i}_{i=1}^s is linearly independent over F_2. Similarly, they are said to have rank r if the dimension of the subspace spanned by {p_i}_{i=1}^s is r.
Definition 2. The coefficients {c_i}_{i=1}^s are said to be in general position if for every set of values b_i ∈ {0, 1, −1}, 1 ≤ i ≤ s, with at least one nonzero b_i, Σ_{i=1}^s c_i b_i ≠ 0.
Definition 3. The coefficients {c_i}_{i=1}^s are said to be µ-separated if for every set of values b_i ∈ {0, 1, −1}, 1 ≤ i ≤ s, with at least one nonzero b_i, Σ_{i=1}^s c_i b_i > µ.
Definition 4. A sign pattern is a distinct vector of signs a = [χ_1(·), χ_2(·), …, χ_s(·)] ∈ {−1, 1}^{1×s} assumed by the set of s parity functions.
Since this work involves switching representations between the real and the binary field, we define a function q that does the switch.
Definition 5. q : {−1, 1}^{a×b} → F_2^{a×b} is the function that converts a sign matrix X into a matrix Y over F_2 such that Y_ij = q(X_ij) = 1 ∈ F_2 if X_ij = −1, and Y_ij = q(X_ij) = 0 ∈ F_2 if X_ij = 1. Clearly, it has an inverse function q^{−1} such that q^{−1}(Y) = X.
We also present some definitions to deal with the case when the polynomial f is not exactly s-sparse and observations are noisy. Let 2^{[n]} denote the power set of [n].
Definition 6. A polynomial f : {−1, 1}^n → R is called approximately (s, ν)-sparse if there exists I ⊂ 2^{[n]} with |I| = s such that Σ_{S ∈ I^c} |c_S| < ν, where {c_S} are the Fourier coefficients as in (1). In other words, the sum of the absolute values of all the coefficients except those corresponding to I is rather small.
3 Problem Setting
Suppose m labeled samples {⟨f(x_i), x_i⟩}_{i=1}^m are drawn from the uniform distribution U on the Boolean hypercube.
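The linear independence of Definition 1 can be checked by Gaussian elimination over F_2, i.e., with XOR as addition. A minimal sketch (the parity vectors p1, p2, p3 are illustrative):

```python
def gf2_rank(vectors):
    """Rank over F_2 of a set of 0/1 vectors (cf. Definition 1):
    Gaussian elimination with XOR, keeping one basis row per
    leading-bit position."""
    basis = {}                     # leading-bit position -> basis row
    rank = 0
    for v in vectors:
        x = int("".join(str(b % 2) for b in v), 2)
        while x:
            lead = x.bit_length()
            if lead not in basis:
                basis[lead] = x    # new pivot found
                rank += 1
                break
            x ^= basis[lead]       # reduce by existing pivot
    return rank

# p1 = x1, p2 = x2, p3 = x1*x2; over F_2, p3 = p1 + p2, so the three
# parities have rank 2 and are not linearly independent.
p1, p2, p3 = [1, 0, 0], [0, 1, 0], [1, 1, 0]
print(gf2_rank([p1, p2]))        # 2
print(gf2_rank([p1, p2, p3]))    # 2
```

The same routine gives the rank r of Definition 1 for any family of parities.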
For any B ⊆ 2^{[n]}, let c_B ∈ R^{2^n × 1} be the vector of real coefficients such that c_B(S) = c_S for all S ∈ B and c_B(S) = 0 for all S ∉ B. Let A ∈ R^{m × 2^n} be such that every row of A corresponds to one random input sample x ∼ U. Let x also denote the row index and S ⊆ [n] the column index of A, with A(x, S) = χ_S(x). Let A_S denote the submatrix formed by the columns corresponding to the subsets in S. Let I be the set consisting of the s parity functions of interest in both the sparse and the approximately sparse cases. A sparse representation of an approximately (s, ν)-sparse function f is f_I = A(x) c_I, where c_I is as defined above. We review the compressed sensing framework used in [12] and [13]. Specifically, for the remainder of the paper, we rely on [13] as a point of reference; we review their framework and explain how we use it to obtain our results, particularly for the noisy case. Let y ∈ R^m, and let β_S ∈ R^{2^n} be such that β_S(S′) = 0 for all S′ ∈ S^c. Note that here S is a subset of the power set 2^{[n]}. Now, consider the following convex program for noisy compressed sensing in this setting:

min ∥β_S∥_1  subject to  sqrt(1/m) ∥A β_S − y∥_2 ≤ ϵ.   (2)

Let β_S^opt be an optimum of program (2). Note that only the columns of A in S are used in the program. The convex program runs in time poly(m, |S|). The incoherence property of the matrix A in [13] implies the following.
Theorem 3 ([13]). For any family of subsets I ⊆ 2^{[n]} such that |I| = s, m = 4096 n s^2, and c_1 = 4, c_2 = 8, for any feasible point β_S of program (2) we have:

∥β_S − β_S^opt∥_2 ≤ c_1 ϵ + c_2 (n/m)^{1/4} ∥β_{I^c ∩ S}∥_1   (3)

with probability at least 1 − O(1/4^n).
When S is set to the power set 2^{[n]}, ϵ = 0, and y is the vector of observed values of an s-sparse polynomial, the s-sparse vector c_I is a feasible point of program (2). By Theorem 3, the program recovers the sparse vector c_I and hence learns the function. The only caveat is that the complexity is exponential in n.
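Once the candidate family S is small and contains the true support I, the recovery step is easy. The sketch below is a simplified noiseless stand-in, not the paper's convex program: when y = A_S c exactly, ordinary least squares on the restricted columns already recovers c, while the ℓ1 objective of program (2) matters once observations are noisy or f is only approximately sparse. The candidate sets and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 10, 400

# Candidate parity sets S (as a first-stage procedure might output);
# the true support {(0, 1), (4,)} is contained in S.
S = [(0, 1), (4,), (2, 3), (0, 2, 4)]
coef = {(0, 1): 1.2, (4,): -0.5}

X = rng.choice([-1, 1], size=(m, n))
A_S = np.column_stack([np.prod(X[:, list(s)], axis=1) for s in S])
y = A_S @ np.array([coef.get(s, 0.0) for s in S])

# Noiseless case: y = A_S c exactly, so least squares on the reduced
# column set recovers c (spurious candidates get coefficient ~0).
c_hat, *_ = np.linalg.lstsq(A_S, y, rcond=None)
print(dict(zip(S, np.round(c_hat, 6))))
```

The point of Lemma 1 (below) is exactly this pipeline: the hard part is producing a small S with I ⊆ S; the convex program over S is then cheap.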
The main idea behind our algorithms for noiseless and noisy sparse function learning is to ‘capture’ the actual s-sparse set I of interest in a small set S of coefficients, with |S| = O(2^s), by a separate algorithm that runs in time poly(n, 2^s). Using the restricted set of coefficients S, we then search for the sparse solution in both the noisy and noiseless cases using program (2).
Lemma 1. Given an algorithm that runs in time poly(n, 2^s) and generates a set of parities S such that |S| = O(2^s) and I ⊆ S with |I| = s, program (2) with S and m = 4096 n s^2 random samples as inputs runs in time poly(n, 2^s) and learns the correct function with probability 1 − O(1/4^n).
Unique sign pattern property: The key property that lets us find a small S efficiently is the unique sign pattern property. Observe that an s-sparse function can produce at most 2^s different real values. If the maximum value obtained always corresponds to a unique pattern of signs of the parities, then by looking only at the random samples x corresponding to the subsequent O(n) occurrences of this maximum value, we show that all the parity functions needed to learn f are captured in a small set of size 2^{s+1} (see Lemma 2 and its proof). The unique sign property again plays an important role, along with Theorem 3 with more technicalities added, in the noisy case, which we visit in Section 2 of the supplementary material. In the next section, we provide an algorithm that generates the bounded set S for the noiseless case for an s-sparse function f, and we provide formal guarantees for the algorithm.
4 Algorithm and Guarantees: Noiseless Case
Let I be the family of s subsets {S_i}_{i=1}^s, each corresponding to one of the s parity functions χ_{S_i}(·) in an s-sparse function f. In this section, we provide an algorithm, named LearnBool, that finds a small
We show that the algorithm learns f in time poly (n, 2s) from uniformly randomly drawn labeled samples from the Boolean hypercube with high probability under some natural conditions. Recall that if the function is such that f(x) attains its maximum value only if [χ1(x), χ2 (x) . . . χs (x)] = amax ∈{−1, 1}s for some unique sign pattern amax, then the function is said to possess the unique sign property. Now we state the main technical lemma for the unique sign property. Lemma 2. If an s-sparse function f has the unique sign property then, in Algorithm 1, S is such that I ⊆S, |S| ≤2s+1 with probability 1 −O 1 n and runs in time poly(n, 2s). Proof. See the supplementary material. The proof of the above lemma involves showing that the random matrix Ymax (see Algorithm 1) has rank at least n −s, leading to at most 2s solutions for each equation in (4). The feasible solutions can be obtained by Gaussian elimination in the binary field. Theorem 4. Let f be an s-sparse function that satisfies at least one of the following properties: (a) The coefficients {ci}s i=1 are in general position. (b) The s parity functions are linearly independent. (c) All the coefficients are positive. Given labeled samples, Algorithm 1 learns f exactly (or vopt = c) in time poly (n, 2s) with probability 1 −O 1 n . Proof. See the supplementary material. Smoothed Analysis Setting: Perturbing ci’s with Gaussian random variables of standard deviation σ > 0 or by random variables drawn from any set of reasonable continuous distributions ensures that the perturbed function satisfies property (a) with probability 1. Random Parity Functions: When ci’s are arbitrary and the set of s parity functions are drawn uniformly randomly from 2[n], then property (b) holds with high probability if s is a constant. Input: Sparsity parameter s, m1 = 2n2s random labeled samples {⟨f (xi) , xi⟩}m1 i=1. Pick samples {xij}nmax j=1 corresponding to the maximum value of f observed in all the m samples. 
Stack all x_{i_j} row-wise into a matrix X_max of dimensions n_max × n. Initialise S = ∅. Let Y_max = q(X_max). Find all feasible solutions p ∈ F_2^{n×1} such that:

1_{n_max × 1} = Y_max p  or  0_{n_max × 1} = Y_max p.   (4)

Collect all feasible solutions p to either of the above equations in the set P ⊆ F_2^{n×1}. Set S = {{j ∈ [n] : p(j) = 1} | p ∈ P}. Using m = 4096 n s^2 more samples (the number of rows of A is m, corresponding to these new samples), solve:

β_S^opt = min ∥β_S∥_1 such that A β_S = y,   (5)

where y is the vector of m observed values. Set v_opt = β_S^opt. Output: v_opt.
Algorithm 1: LearnBool
5 A Sparse Polynomial Learning Application: Hypergraph Sketching
Hypergraphs can be used to model the relations in real-world data sets (e.g., books read by users on Amazon). We show that cut functions on hypergraphs satisfy the unique sign property. Learning the cut function of a sparse hypergraph from random cuts is a special case of learning a sparse polynomial from samples drawn uniformly from the Boolean hypercube. To track the evolution of large hypergraphs over a small time interval, it is enough to learn the cut function of the difference graph, which is often sparse. This is called the graph sketching problem. Previously, graph sketching was applied to social network evolution [12]. We generalize this to hypergraphs, showing that they satisfy the unique sign property, which enables faster algorithms, and we provide experimental results on real data sets.
5.1 Graph Sketching
A hypergraph G = (V, E) is a set of vertices V along with a set E of subsets of V called the hyperedges. The size of a hyperedge is the number of variables that the hyperedge connects. Let d be the maximum hyperedge size of the graph G, with |V| = n and |E| = s. A random cut S ⊆ V is a set of vertices selected uniformly at random. Define the value of the cut S to be c(S) = |{e ∈ E : e ∩ S ≠ ∅ and e ∩ (V − S) ≠ ∅}|.
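The feasibility step of Algorithm 1, equation (4), asks for all p with Y_max p equal to the all-ones or all-zeros vector over F_2. The sketch below enumerates solutions by brute force, which only works for small n; the paper's algorithm uses Gaussian elimination over F_2, which is what scales. The matrix Y is a toy example, not data from the paper.

```python
import numpy as np
from itertools import product

def feasible_parities(Y):
    """All p in F_2^n with Y p = all-ones or Y p = all-zeros over F_2
    (equation (4)). Brute force for illustration; Gaussian elimination
    over F_2 handles large n."""
    m, n = Y.shape
    sols = []
    for p in product((0, 1), repeat=n):
        b = (Y @ np.array(p)) % 2
        if b.sum() in (0, m):          # all zeros or all ones
            sols.append(p)
    return sols

# Toy Y_max: each row is q(x) for a sample x that attained max f
# (1 encodes x_i = -1).
Y = np.array([[1, 0, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1]])
cands = feasible_parities(Y)
print(len(cands))   # 4 candidate parities, including the constant p = 0
```

Here Y has rank 2 over F_2, so Y p = 0 has 2^{4−2} = 4 solutions and Y p = 1 happens to be infeasible, matching the 2^{n − rank} count used in the proof of Lemma 2.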
Graph sketching is the problem of identifying the graph structure from random queries that evaluate the value of a random cut, where s ≪ n (the sparse setting). Hypergraphs naturally specify relations among a set of objects through hyperedges. For example, Amazon users can form the set E and Amazon books can form the set V. Each user may read a subset of books, which represents a hyperedge. Learning the hypergraph corresponds to identifying the sets of books bought by each user. For more examples of hypergraphs in real data sets, we refer the reader to [8]. Such hypergraphs evolve over time. The difference graph between two consecutive time instants is expected to be sparse (the number of edges s and the maximum hyperedge size d are small). We are interested in learning such hypergraphs from random cut queries. For simplicity and convenience, we consider the cut complement query, i.e., the c-cut, which returns s − c(S). One can easily represent the c-cut query with a sparse polynomial as follows. Let node i correspond to the variable x_i ∈ {−1, +1}. A random cut involves choosing each x_i uniformly at random from {−1, +1}; the variables assigned to +1 belong to the random cut S. The value is given by the polynomial

f_{c-cut}(x) = Σ_{I ∈ E} ( Π_{i ∈ I} (1 + x_i)/2 + Π_{i ∈ I} (1 − x_i)/2 ) = Σ_{I ∈ E} (1 / 2^{|I|−1}) Σ_{J ⊆ I, |J| even} Π_{i ∈ J} x_i.   (6)

Hence, the c-cut function is a sparse polynomial with sparsity at most s 2^{d−1}. The variables corresponding to the nodes that belong to some hyperedge appear in the polynomial. We call these the relevant variables, and their number is denoted by k. Note that, in our sparse setting, k ≤ sd. We note that for a hypergraph with no singleton hyperedge, given the c-cut function, it is easy to recover the hyperedges from (6). Therefore, we focus on learning the c-cut function to sketch the hypergraph.
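The identity behind equation (6) (the first term is the indicator that a hyperedge lies entirely inside the cut, the second that it lies entirely outside, so the sum counts uncut hyperedges) can be checked exhaustively on a toy hypergraph. The two hyperedges below are illustrative:

```python
import numpy as np
from itertools import product

E = [{0, 1, 2}, {2, 3}]          # toy hypergraph: two hyperedges
n, s = 5, len(E)

def cut_value(x):
    """Number of hyperedges split by the cut S = {i : x_i = +1}."""
    return sum(1 for e in E
               if any(x[i] == 1 for i in e) and any(x[i] == -1 for i in e))

def f_ccut(x):
    """Cut complement via the product form of equation (6)."""
    return sum(np.prod([(1 + x[i]) / 2 for i in e])
               + np.prod([(1 - x[i]) / 2 for i in e]) for e in E)

# The polynomial agrees with s - cut(S) on every cut assignment.
for x in product((-1, 1), repeat=n):
    assert f_ccut(x) == s - cut_value(x)
print("identity verified on all", 2 ** n, "cuts")
```

Expanding each product also makes the all-positive coefficients of (6) visible, which is the property (Theorem 4(c)) that makes LearnBool applicable.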
When G is a graph with edges (of cardinality 2), the compressed sensing approach (using program (2)) with the cut (or c-cut) values as measurements is shown to be very efficient in [12] in terms of sample complexity, i.e., the required number of queries. The run time is efficient because the total number of candidate parities is O(n^2). However, when we consider hypergraphs, i.e., when d is a large constant, the compressed sensing approach cannot scale computationally (poly(n^d) runtime). Here, based on the theory developed, we give a faster algorithm based on the unique sign property, with sample complexity m_1 = O(2^k d log n + 2^{2d+1} s^2 (log n + k)) and run time O(m_1 2^k + n^2 log n). We observe that the c-cut polynomial satisfies the unique sign property. From (6), it is evident that the polynomial has only positive coefficients; therefore, by Theorem 4, algorithm LearnBool succeeds. The maximum value of the c-cut function is the number of edges. Notice that the maximum value is definitely observed in two configurations of the relevant variables: when all relevant variables are +1 or when all are −1. Therefore, the maximum value is observed roughly once in every 2^{k−1} ≤ 2^{sd} samples. Thus, a direct application of LearnBool yields poly(n, 2^{k−1}) time complexity, which improves on the O(n^d) bound for small s and d. Improving further, we provide a more efficient algorithm tailored for the hypergraph sketching problem, which makes use of the unique sign property and some other properties of the cut function. Algorithm LearnGraph (Algorithm 4) is provided in the supplementary material.
Figure 1(a): Runtime (seconds, log scale) vs. number of variables n for LearnGraph and compressed sensing, with d = 3 and s = 1. Figure 1(b): Probability of error vs. α (number of samples / n) for Settings 1–4.
Figure 1: Performance figures comparing LearnGraph and the compressed sensing approach.
Theorem 5. Algorithm 4 exactly learns the c-cut function with probability 1 − O(1/n), with sample complexity m_1 = O(2^k d log n + 2^{2d+1} s^2 (log n + k)) and time complexity O(2^k m_1 + n^2 d log n).
Proof. See the supplementary material.
5.2 Yahoo! Messenger User Communication Pattern Dataset
We performed simulations using MATLAB on an Intel(R) Xeon(R) quad-core 3.6 GHz machine with 16 GB RAM and 10M cache. We run our algorithm on the Yahoo! Messenger User Communication Pattern Dataset [17]. This dataset contains time-stamped user communication data, i.e., information about a large number of messages sent over Yahoo! Messenger, for a duration of 28 days.
Dataset: Each row represents a message. The first two columns show the day and time (time stamp) of the message, respectively. The third and fifth columns show the IDs of the transmitting and receiving users, respectively. The fourth column shows the zipcode (spatial stamp) from which the message was transmitted. The sixth column shows whether the transmitter was in the contact list of the receiving user (y) or not (n). If a transmitter sends the same receiver more than one message from the same zipcode, only the first message is shown in the dataset. In total, there are 100000 unique users and 5649 unique zipcodes. We form a hypergraph from the dataset as follows: the transmitting users form the hyperedges and the receiving users form the nodes of the hypergraph. A hyperedge connects a set T of users if there is a transmitting user that sends a message to all the users in T. In any given short time interval δt and small set of locations δx (specified by a number of zip codes), there are few users who transmit (s) and they transmit to very few users (d).
The complete set of nodes in the hypergraph (n) is taken to be those receiving users who are active during m consecutive intervals of length δt and in a set of δx zipcodes. This gives rise to a sparse graph. We identify the active set of transmitting users (hyperedges) and their corresponding receivers (nodes in these hyperedges) during a short time interval δt and a randomly selected space interval (δx, i.e., zip codes) from a large pool of receivers (nodes) that are observed during m intervals of length δt. Details of δt, m and δx chosen for the experiments are given in Table 1. We note that n is usually on the order of 1000.

Remark: Our task is to learn the c-cut function from random queries, i.e., random +/−1 assignments of variables and the corresponding c-cut values. The generated sparse graph contains only hyperedges that have more than one node. Other hyperedges (transmitting users) with just one node in the sparse hypergraph are not taken into account. This is because a singleton hyperedge i is always counted in the c-cut function, so its presence is effectively masked. First, we identify the relevant variables that participate in the sparse graph. After identifying this set of candidates, correlating the corresponding candidate parities with the function output yields the Fourier coefficient of each parity (see Algorithm 4).

Table 1: Runtime for different graphs. LG: LearnGraph, CS: compressed sensing based algorithm.

(a) Runtime (seconds) for a d = 4, s = 1 graph:

    n     88      159   288   556   1221
    LG    1.96    2.13  2.23  2.79  4.94
    CS    265.63  -     -     -     -

(b) Runtime (seconds) for a d = 4, s = 3 graph:

    n     52     104      246   412   1399
    LG    1.91   2.08     2.08  2.30  4.98
    CS    39.89  > 10823  -     -     -

(c) Simulation parameters for Fig. 1b:

    Setting No.  Interval  # of Int.  n     max(d)  max(s)  Zip. Set Size
    Setting 1    5 min.    20         6822  10      19      20
    Setting 2    20 sec.   200        5730  22      4       200
    Setting 3    10 min.   10         6822  11      13      2
    Setting 4    2 min.    50         6822  30      21      50

5.2.1 Performance Comparison with the Compressed Sensing Approach

First, we compare the runtime of our implementation, LearnGraph, with the compressed sensing based algorithm from [12]. Both algorithms correctly identify the relevant variables over the entire considered range of parameters. The last step of finding the corresponding Fourier coefficients is omitted; it can be easily implemented (Algorithm 4) without significantly affecting the running time. As can be seen in Tables 1a, 1b and Fig. 1a, LearnGraph scales well to graphs on thousands of nodes. In contrast, the compressed sensing approach must handle a measurement matrix of size O(n^d), which becomes prohibitively large on graphs with more than a few hundred nodes.

5.2.2 Error Performance of LearnGraph

The error probability (the probability that the correct c-cut function is not recovered) versus the number of samples used is plotted for four different experimental settings of δt, δx and m in Fig. 1b. For each time interval, the error probability is calculated by averaging the number of errors over 100 different trials. For each value of α (number of samples), the error probability is averaged over time intervals to illustrate the error performance. We only keep the intervals for which the graph filtered by the considered zipcodes contains at least one user with more than one neighbor. We find that for the first three settings, the error probability decreases with more samples. For the fourth setting, d and s are very large and hence a large number of samples is required; for that reason, the error probability does not improve significantly. The probability of error can be reduced by repeating the experiment multiple times and taking a majority vote, at the cost of significantly more samples. Our plot shows only the probability of error without such majority amplification.

6 Conclusions

We presented a novel algorithm for learning sparse polynomials from random samples on the Boolean hypercube.
While the general problem of learning all sparse polynomials is notoriously hard, we show that almost all sparse polynomials can be efficiently learned using our algorithm. This is because our unique sign property holds for randomly perturbed coefficients, in addition to several other natural settings. As an application, we show that graph and hypergraph sketching lead to sparse polynomial learning problems that always satisfy the unique sign property. This allows us to obtain efficient reconstruction algorithms that outperform the previous state of the art for these problems. An important open problem is to achieve the sample complexity of [12] while keeping the computational complexity polynomial in n.

Acknowledgments

M.K., K.S. and A.D. acknowledge the support of NSF via CCF 1422549, 1344364, 1344179, and a DARPA STTR and an ARO YIP award.

References

[1] E. Kushilevitz and Y. Mansour, “Learning decision trees using the Fourier spectrum,” SIAM J. Comput., vol. 22, no. 6, 1993, pp. 1331–1348.
[2] Y. Mansour, “Randomized interpolation and approximation of sparse polynomials,” SIAM J. Comput., vol. 24, no. 2, Philadelphia, PA: Society for Industrial and Applied Mathematics, 1995, pp. 357–368.
[3] R. Schapire and R. Sellie, “Learning sparse multivariate polynomials over a field with queries and counterexamples,” JCSS: Journal of Computer and System Sciences, vol. 52, 1996.
[4] A. C. Gilbert, S. Guha, P. Indyk, S. Muthukrishnan, and M. Strauss, “Near-optimal sparse Fourier representations via sampling,” in Proceedings of STOC, 2002, pp. 152–161.
[5] P. Gopalan, A. Kalai, and A. Klivans, “Agnostically learning decision trees,” in Proceedings of STOC, 2008, pp. 527–536.
[6] A. Akavia, “Deterministic sparse Fourier approximation via fooling arithmetic progressions,” in Proceedings of COLT, 2010, pp. 381–393.
[7] D. Spielman and S. Teng, “Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time,” JACM: Journal of the ACM, vol. 51, 2004.
[8] P. Li, “Relational learning with hypergraphs,” Ph.D. dissertation, École Polytechnique Fédérale de Lausanne, 2013.
[9] E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[10] E. J. Candès and T. Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
[11] D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[12] P. Stobbe and A. Krause, “Learning Fourier sparse set functions,” in Proceedings of the International Conference on Artificial Intelligence and Statistics, 2012, pp. 1125–1133.
[13] S. Negahban and D. Shah, “Learning sparse boolean polynomials,” in Proceedings of the 50th Annual Allerton Conference on Communication, Control, and Computing. IEEE, 2012, pp. 2032–2036.
[14] A. Andoni, R. Panigrahy, G. Valiant, and L. Zhang, “Learning sparse polynomial functions,” in Proceedings of SODA, 2014.
[15] A. T. Kalai, A. Samorodnitsky, and S.-H. Teng, “Learning and smoothed analysis,” in Proceedings of FOCS. IEEE Computer Society, 2009, pp. 395–404.
[16] R. O’Donnell, Analysis of Boolean Functions. Cambridge University Press, 2014.
[17] Yahoo, “Yahoo! webscope dataset ydata-ymessenger-user-communication-pattern-v1 0,” http://research.yahoo.com/Academic Relations.
The Noisy Power Method: A Meta Algorithm with Applications

Moritz Hardt∗ (IBM Research Almaden)   Eric Price† (IBM Research Almaden)

Abstract

We provide a new robust convergence analysis of the well-known power method for computing the dominant singular vectors of a matrix, which we call the noisy power method. Our result characterizes the convergence behavior of the algorithm when a significant amount of noise is introduced after each matrix-vector multiplication. The noisy power method can be seen as a meta-algorithm that has recently found a number of important applications in a broad range of machine learning problems including alternating minimization for matrix completion, streaming principal component analysis (PCA), and privacy-preserving spectral analysis. Our general analysis subsumes several existing ad-hoc convergence bounds and resolves a number of open problems in multiple applications:

Streaming PCA. A recent work of Mitliagkas et al. (NIPS 2013) gives a space-efficient algorithm for PCA in a streaming model where samples are drawn from a gaussian spiked covariance model. We give a simpler and more general analysis that applies to arbitrary distributions, confirming experimental evidence of Mitliagkas et al. Moreover, even in the spiked covariance model our result gives quantitative improvements in a natural parameter regime. It is also notably simpler and follows easily from our general convergence analysis of the noisy power method together with a matrix Chernoff bound.

Private PCA. We provide the first nearly-linear time algorithm for the problem of differentially private principal component analysis that achieves nearly tight worst-case error bounds. Complementing our worst-case bounds, we show that the error dependence of our algorithm on the matrix dimension can be replaced by an essentially tight dependence on the coherence of the matrix. This result resolves the main problem left open by Hardt and Roth (STOC 2013).
The coherence is always bounded by the matrix dimension but often substantially smaller, thus leading to strong average-case improvements over the optimal worst-case bound.

1 Introduction

Computing the dominant singular vectors of a matrix is one of the most important algorithmic tasks underlying many applications including low-rank approximation, PCA, spectral clustering, dimensionality reduction, matrix completion and topic modeling. The classical problem is well-understood, but many recent applications in machine learning face the fundamental problem of approximately finding singular vectors in the presence of noise. Noise can enter the computation through a variety of sources including sampling error, missing entries, adversarial corruptions and privacy constraints. It is desirable to have one robust method for handling a variety of cases without the need for ad-hoc analyses. In this paper we consider the noisy power method, a fast general-purpose method for computing the dominant singular vectors of a matrix when the target matrix can only be accessed through inaccurate matrix-vector products.

∗Email: mhardt@us.ibm.com. †Email: ecprice@cs.utexas.edu.

Figure 1 describes the method when the target matrix A is a symmetric d × d matrix; a generalization to asymmetric matrices is straightforward. The algorithm starts from an initial matrix X_0 ∈ R^{d×p} and iteratively attempts to perform the update rule X_ℓ → A X_ℓ. However, each such matrix product is followed by a possibly adversarially and adaptively chosen perturbation G_ℓ, leading to the update rule X_ℓ → A X_ℓ + G_ℓ. It will be convenient, though not necessary, to maintain that X_ℓ has orthonormal columns, which can be achieved through a QR-factorization after each update.

Input: Symmetric matrix A ∈ R^{d×d}, number of iterations L, dimension p
  1. Choose X_0 ∈ R^{d×p}.
  2. For ℓ = 1 to L:
     (a) Y_ℓ ← A X_{ℓ−1} + G_ℓ, where G_ℓ ∈ R^{d×p} is some perturbation
     (b) Let Y_ℓ = X_ℓ R_ℓ be a QR-factorization of Y_ℓ
Output: Matrix X_L

Figure 1: Noisy Power Method (NPM)

The noisy power method is a meta-algorithm that, when instantiated with different settings of G_ℓ and X_0, adapts to a variety of applications. In fact, there have been a number of recent surprising applications of the noisy power method:

1. Jain et al. [JNS13, Har14] observe that the update rule of the well-known alternating least squares heuristic for matrix completion can be considered an instance of NPM. This led to the first provable convergence bounds for this important heuristic.
2. Mitliagkas et al. [MCJ13] observe that NPM applies to a streaming model of principal component analysis (PCA), where it leads to a space-efficient and practical algorithm for PCA in settings where the covariance matrix is too large to process directly.
3. Hardt and Roth [HR13] consider the power method in the context of privacy-preserving PCA, where noise is added to achieve differential privacy.

In each setting there has so far only been an ad-hoc analysis of the noisy power method. In the first setting, only local convergence is argued, that is, X_0 has to be cleverly chosen. In the second setting, the analysis only holds for the spiked covariance model of PCA. In the third application, only the case p = 1 was considered. In this work we give a completely general analysis of the noisy power method that overcomes the limitations of previous analyses. Our result characterizes the global convergence properties of the algorithm in terms of the noise G_ℓ and the initial subspace X_0. We then consider the important case where X_0 is a randomly chosen orthonormal basis. This case is rather delicate since the initial correlation between a random matrix X_0 and the target subspace is vanishing in the dimension d for small p.
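As a concrete illustration, here is a minimal numpy sketch of the update rule in Figure 1, with the perturbation G_ℓ supplied by a caller-chosen callback. The toy matrix, spectrum, and noise level in the demonstration are assumptions chosen for illustration, not parameters from the paper.

```python
import numpy as np

def noisy_power_method(matvec, d, p, L, noise=None, rng=None):
    """Noisy power method (Figure 1): X_l <- QR(A @ X_{l-1} + G_l).

    `matvec` applies the symmetric matrix A to a d x p block;
    `noise` (if given) returns the perturbation G_l at step l.
    """
    rng = np.random.default_rng(rng)
    # Random orthonormal starting block X_0 in R^{d x p}.
    X, _ = np.linalg.qr(rng.standard_normal((d, p)))
    for l in range(1, L + 1):
        Y = matvec(X)
        if noise is not None:
            Y = Y + noise(l, X)          # adversarial/adaptive perturbation G_l
        X, _ = np.linalg.qr(Y)           # re-orthonormalize
    return X

# Toy check: recover the top-2 eigenspace of a symmetric matrix with a clear
# eigengap, under small (hypothetical) Gaussian perturbations.
rng = np.random.default_rng(0)
d, k, p = 50, 2, 4
U0, _ = np.linalg.qr(rng.standard_normal((d, d)))
eigvals = np.concatenate(([1.0, 0.9], np.linspace(0.5, 0.05, d - 2)))
A = U0 @ np.diag(eigvals) @ U0.T
X = noisy_power_method(lambda X: A @ X, d, p, L=100,
                       noise=lambda l, X: 1e-6 * rng.standard_normal((d, p)),
                       rng=1)
U = U0[:, :k]                                  # top-k eigenvectors
err = np.linalg.norm(U - X @ (X.T @ U), 2)     # ||(I - X X^T) U||
```

With noise this small relative to the gap σ_2 − σ_3, the residual `err` lands near the noise floor rather than at exactly zero, which is the qualitative behavior Corollary 1.1 quantifies.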
Another important feature of the analysis is that it shows how X_ℓ converges towards the first k ⩽ p singular vectors. Choosing p to be larger than the target dimension leads to a quantitatively stronger result. Theorem 2.3 formally states our convergence bound. Here we highlight one useful corollary to illustrate our more general result.

Corollary 1.1. Let k ⩽ p. Let U ∈ R^{d×k} represent the top k singular vectors of A and let σ_1 ⩾ · · · ⩾ σ_d ⩾ 0 denote its singular values. Suppose X_0 is an orthonormal basis of a random p-dimensional subspace. Further suppose that at every step of NPM we have

5∥G_ℓ∥ ⩽ ε(σ_k − σ_{k+1})  and  5∥U^⊤ G_ℓ∥ ⩽ (σ_k − σ_{k+1}) (√p − √(k−1)) / (τ√d)

for some fixed parameter τ and ε < 1/2. Then with all but τ^{−Ω(p+1−k)} + e^{−Ω(d)} probability, there exists an L = O((σ_k/(σ_k − σ_{k+1})) log(dτ/ε)) so that after L steps we have that
∥(I − X_L X_L^⊤) U∥
⩽ ε. The corollary shows that the algorithm converges in the strong sense that the entire spectral norm of U, up to an ε error, is contained in the space spanned by X_L. To achieve this the result places two assumptions on the magnitude of the noise. The total spectral norm of G_ℓ must be bounded by ε times the separation between σ_k and σ_{k+1}. This dependence on the singular value separation arises even in the classical perturbation theory of Davis-Kahan [DK70]. The second condition is specific to the power method and requires that the noise term is proportionally smaller when projected onto the space spanned by the top k singular vectors. This condition ensures that the correlation between X_ℓ and U, which is initially very small, is not destroyed by the noise addition step. If the noise term has some spherical properties (e.g. a Gaussian matrix), we expect the projection onto U to be smaller by a factor of √(k/d), since the space U is k-dimensional. In the case where p = k + Ω(k) this is precisely what the condition requires. When p = k the requirement is stronger by a factor of k. This phenomenon stems from the fact that the smallest singular value of a random p × k gaussian matrix behaves differently in the square and the rectangular case. We demonstrate the usefulness of our convergence bound with several novel results in some of the aforementioned applications.

1.1 Application to memory-efficient streaming PCA

In the streaming PCA setting we receive a stream of samples z_1, z_2, . . . , z_n ∈ R^d drawn i.i.d. from an unknown distribution D over R^d. Our goal is to compute the dominant k eigenvectors of the covariance matrix A = E_{z∼D} zz^⊤. The challenge is to do this in space linear in the output size, namely O(kd). Recently, Mitliagkas et al. [MCJ13] gave an algorithm for this problem based on the noisy power method. We analyze the same algorithm, which we restate here and call SPM:

Input: Stream of samples z_1, z_2, . . . , z_n ∈ R^d, iterations L, dimension p
  1. Let X_0 ∈ R^{d×p} be a random orthonormal basis. Let T = ⌊n/L⌋.
  2. For ℓ = 1 to L:
     (a) Compute Y_ℓ = A_ℓ X_{ℓ−1}, where A_ℓ = Σ_{i=(ℓ−1)T+1}^{ℓT} z_i z_i^⊤
     (b) Let Y_ℓ = X_ℓ R_ℓ be a QR-factorization of Y_ℓ
Output: Matrix X_L

Figure 2: Streaming Power Method (SPM)

The algorithm can be executed in space O(pd) since the update step can compute the d × p matrix A_ℓ X_{ℓ−1} incrementally without explicitly computing A_ℓ. The algorithm maps to our setting by defining G_ℓ = (A_ℓ − A) X_{ℓ−1}. With this notation, Y_ℓ = A X_{ℓ−1} + G_ℓ. We can apply Corollary 1.1 directly once we have suitable bounds on ∥G_ℓ∥ and ∥U^⊤ G_ℓ∥. The result of [MCJ13] is specific to the spiked covariance model. The spiked covariance model is defined by an orthonormal basis U ∈ R^{d×k} and a diagonal matrix Λ ∈ R^{k×k} with diagonal entries λ_1 ⩾ λ_2 ⩾ · · · ⩾ λ_k > 0. The distribution D(U, Λ) is defined as the normal distribution N(0, UΛ^2 U^⊤ + σ^2 Id_{d×d}). Without loss of generality we can scale the examples such that λ_1 = 1. One corollary of our result shows that the algorithm outputs X_L such that
∥(I − X_L X_L^⊤) U∥
⩽ ε with probability 9/10, provided p = k + Ω(k) and the number of samples satisfies n = Θ(((σ^6 + 1)/(ε^2 λ_k^6)) kd). Previously, the same bound^1 was known with a quadratic dependence on k in the case where p = k. Here we can strengthen the bound by increasing p slightly. While we can get some improvements even in the spiked covariance model, our result is substantially more general and applies to any distribution. The sample complexity bound we get varies according to a technical parameter of the distribution. Roughly speaking, we get a near-linear sample complexity if the distribution is either “round” (as in the spiked covariance setting) or is very well approximated by a k-dimensional subspace. To illustrate the latter condition, we have the following result without making any assumptions other than scaling the distribution:

Corollary 1.2. Let D be any distribution scaled so that Pr{∥z∥ > t} ⩽ exp(−t) for every t ⩾ 1. Let U represent the top k eigenvectors of the covariance matrix E zz^⊤ and σ_1 ⩾ · · · ⩾ σ_d ⩾ 0 its eigenvalues. Then, SPM invoked with p = k + Ω(k) outputs a matrix X_L such that with probability 9/10 we have
∥(I − X_L X_L^⊤) U∥
⩽ ε, provided SPM receives n samples where n satisfies n = Õ((σ_k/(ε^2 k (σ_k − σ_{k+1})^3)) · d).

[Footnote 1: That the bound stated in [MCJ13] has a σ^6 dependence is not completely obvious. There is a O(σ^4) in the numerator and log((σ^2 + 0.75λ_k^2)/(σ^2 + 0.5λ_k^2)) in the denominator, which simplifies to O(1/σ^2) for constant λ_k and σ^2 ⩾ 1.]

The corollary establishes a sample complexity that's linear in d provided that the spectrum decays quickly, as is common in applications. For example, if the spectrum follows a power law so that σ_j ≈ j^{−c} for a constant c > 1/2, the bound becomes n = Õ(k^{2c+2} d/ε^2).

1.2 Application to privacy-preserving spectral analysis

Many applications of singular vector computation are plagued by the fact that the underlying matrix contains sensitive information about individuals. A successful paradigm in privacy-preserving data analysis rests on the notion of differential privacy, which requires all access to the data set to be randomized in such a way that the presence or absence of a single data item is hidden. The notion of a data item varies and could refer to a single entry, a single row, or a rank-1 matrix of bounded norm. More formally, differential privacy requires that the output distribution of the algorithm changes only slightly with the addition or deletion of a single data item. This requirement often necessitates the introduction of significant levels of noise that make the computation of various objectives challenging. Differentially private singular vector computation has been studied actively since the work of Blum et al. [BDMN05]. There are two main objectives. The first is computational efficiency. The second objective is to minimize the amount of error that the algorithm introduces. In this work, we give a fast algorithm for differentially private singular vector computation based on the noisy power method that leads to nearly optimal bounds in a number of settings that were considered in previous work. The algorithm is described in Figure 3.
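A minimal numpy sketch of the private power method of Figure 3 may help before the formal statements. The noise calibration σ = ε^{−1}√(4pL log(1/δ)) and the per-iteration scaling by ∥X_{ℓ−1}∥_∞ follow the figure; the toy matrix and parameter values in the usage lines are assumptions for illustration only.

```python
import numpy as np

def private_power_method(A, p, L, eps, delta, rng=None):
    """Sketch of the private power method (Figure 3): noisy power
    iteration with entrywise Gaussian noise calibrated for
    (eps, delta)-differential privacy under single-entry changes."""
    d = A.shape[0]
    rng = np.random.default_rng(rng)
    sigma = np.sqrt(4 * p * L * np.log(1 / delta)) / eps   # eps^-1 sqrt(4 p L log(1/delta))
    X, _ = np.linalg.qr(rng.standard_normal((d, p)))       # random orthonormal X_0
    for _ in range(L):
        scale = np.abs(X).max() * sigma                    # ||X_{l-1}||_inf * sigma
        G = scale * rng.standard_normal((d, p))            # G_l ~ N(0, ||X||_inf^2 sigma^2)^{d x p}
        X, _ = np.linalg.qr(A @ X + G)
    return X

# Usage on a toy symmetric matrix (illustrative parameters only):
rng = np.random.default_rng(3)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 2
X = private_power_method(A, p=5, L=8, eps=1.0, delta=1e-6)
```

On a small unstructured matrix like this one, the privacy noise dominates and the output is close to random; Theorem 1.3 quantifies when the signal, measured by σ_k − σ_{k+1}, dominates the noise.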
It's a simple instance of NPM in which each noise matrix G_ℓ is a gaussian random matrix scaled so that the algorithm achieves (ε, δ)-differential privacy (as formally defined in Definition E.1). It is easy to see that the algorithm can be implemented in time nearly linear in the number of nonzero entries of the input matrix (input sparsity). This will later lead to strong improvements in running time compared with several previous works.

Input: Symmetric A ∈ R^{d×d}, L, p, privacy parameters ε, δ > 0
  1. Let X_0 be a random orthonormal basis and put σ = ε^{−1}√(4pL log(1/δ))
  2. For ℓ = 1 to L:
     (a) Y_ℓ ← A X_{ℓ−1} + G_ℓ, where G_ℓ ∼ N(0, ∥X_{ℓ−1}∥_∞^2 σ^2)^{d×p}
     (b) Compute the QR-factorization Y_ℓ = X_ℓ R_ℓ
Output: Matrix X_L

Figure 3: Private Power Method (PPM). Here ∥X∥_∞ = max_{ij} |X_{ij}|.

We first state a general-purpose analysis of PPM that follows from Corollary 1.1.

Theorem 1.3. Let k ⩽ p. Let U ∈ R^{d×k} represent the top k singular vectors of A and let σ_1 ⩾ · · · ⩾ σ_d ⩾ 0 denote its singular values. Then, PPM satisfies (ε, δ)-differential privacy, and after L = O((σ_k/(σ_k − σ_{k+1})) log(d)) iterations we have with probability 9/10 that
∥(I − X_L X_L^⊤) U∥
⩽ O( (σ max_ℓ ∥X_ℓ∥_∞ √d log L / (σ_k − σ_{k+1})) · (√p/(√p − √(k−1))) ).

When p = k + Ω(k) the trailing factor becomes a constant. If p = k it creates a factor-k overhead. In the worst case we can always bound ∥X_ℓ∥_∞ by 1, since X_ℓ is an orthonormal basis. However, in principle we could hope that a much better bound holds provided that the target subspace U has small coordinates. Hardt and Roth [HR12, HR13] suggested a way to accomplish a stronger bound by considering a notion of coherence of A, denoted µ(A). Informally, the coherence is a well-studied parameter that varies between 1 and n, but is often observed to be small. Intuitively, the coherence measures the correlation of the singular vectors of the matrix with the standard basis. Low coherence means that the singular vectors have small coordinates in the standard basis. Many results on matrix completion and robust PCA crucially rely on the assumption that the underlying matrix has low coherence [CR09, CT10, CLMW11] (though the notion of coherence here will be somewhat different).

Theorem 1.4. Under the assumptions of Theorem 1.3, we have the conclusion
∥(I − X_L X_L^⊤) U∥
⩽ O( (σ √(µ(A) log d) log L / (σ_k − σ_{k+1})) · (√p/(√p − √(k−1))) ).

Hardt and Roth proved this result for the case where p = 1. The extension to p > 1 lost a factor of √d in general and therefore gave no improvement over Theorem 1.3. Our result resolves the main problem left open in their work. The strength of Theorem 1.4 is that the bound is essentially dimension-free under a natural assumption on the matrix, and never worse than our worst-case result. It is also known that in general the dependence on d achieved in Theorem 1.3 is best possible in the worst case (see the discussion in [HR13]), so further progress requires making stronger assumptions. Coherence is a natural such assumption. The proof of Theorem 1.4 proceeds by showing that each iterate X_ℓ satisfies ∥X_ℓ∥_∞ ⩽ O(√(µ(A) log(d)/d)) and applying Theorem 1.3. To do this we exploit a non-trivial symmetry of the algorithm that we discuss in Section E.3.

Other variants of differential privacy. Our discussion above applied to (ε, δ)-differential privacy under changing a single entry of the matrix. Several works consider other variants of differential privacy. It is generally easy to adapt the power method to these settings by changing the noise distribution or its scaling. To illustrate this aspect, we consider the problem of privacy-preserving principal component analysis as recently studied in [CSS12, KT13]. Both works consider an algorithm called the exponential mechanism. The first work gives a heuristic implementation that may not converge, while the second gives a provably polynomial-time algorithm, though the running time is more than cubic. Our algorithm gives strong improvements in running time while giving nearly optimal accuracy guarantees, as it matches a lower bound of [KT13] up to a Õ(√k) factor. We also improve the error dependence on k by polynomial factors compared to previous work.
Moreover, we get an accuracy improvement of O(√d) for the case of (ε, δ)-differential privacy, while these previous works only apply to (ε, 0)-differential privacy. Section E.2 provides formal statements.

1.3 Related Work

Numerical analysis. One might expect that a suitable analysis of the noisy power method would have appeared in the numerical analysis literature. However, we are not aware of a reference, and there are a number of points to consider. First, our noise model is adaptive, thus setting it apart from the classical perturbation theory of the singular vector decomposition [DK70]. Second, we think of the perturbation at each step as large, making it conceptually different from floating point errors. Third, research in numerical analysis over the past decades has largely focused on faster Krylov subspace methods. There is some theory of inexact Krylov methods, e.g., [SS07], that captures the effect of noisy matrix-vector products in this context. Related to our work are also results on the perturbation stability of the QR-factorization, since those could be used to obtain convergence bounds for subspace iteration. Such bounds, however, must depend on the condition number of the matrix that the QR-factorization is applied to. See Chapter 19.9 in [Hig02] and the references therein for background. Our proof strategy avoids this particular dependence on the condition number.

Streaming PCA. PCA in the streaming model is related to a host of well-studied problems that we cannot survey completely here. We refer to [ACLS12, MCJ13] for a thorough discussion of prior work. Not mentioned therein is a recent work on incremental PCA [BDF13] that leads to space-efficient algorithms computing the top singular vector; however, it's not clear how to extend their results to computing multiple singular vectors.

Privacy. There has been much work on differentially private spectral analysis, starting with Blum et al.
[BDMN05], who used an algorithm known as Randomized Response, which adds a single noise matrix N either to the input matrix A or the covariance matrix AA^⊤. This approach appears in a number of papers, e.g. [MM09]. While often easy to analyze, it has the disadvantage that it converts sparse matrices to dense matrices and is often impractical on large data sets. Chaudhuri et al. [CSS12] and Kapralov-Talwar [KT13] use the so-called exponential mechanism to sample approximate eigenvectors of the matrix. The sampling is done using a heuristic approach without polynomial-time convergence guarantees in the first case, and using a polynomial-time algorithm in the second. Both papers achieve a tight dependence on the matrix dimension d (though the dependence on k is suboptimal in general). Most closely related to our work are the results of Hardt and Roth [HR13, HR12], which introduced matrix coherence as a way to circumvent existing worst-case lower bounds on the error. They also analyzed a natural noisy variant of power iteration for the case of computing the dominant eigenvector of A. When multiple eigenvectors are needed, their algorithm uses the well-known deflation technique. However, this step loses control of the coherence of the original matrix and hence results in suboptimal bounds. In fact, a √rank(A) factor is lost.

1.4 Open Questions

We believe Corollary 1.1 to be a fairly precise characterization of the convergence of the noisy power method to the top k singular vectors when p = k. The main flaw is that the noise tolerance depends on the eigengap σ_k − σ_{k+1}, which could be very small. We have some conjectures for results that do not depend on this eigengap. First, when p > k, we think that Corollary 1.1 might hold using the gap σ_k − σ_{p+1} instead of σ_k − σ_{k+1}. Unfortunately, our proof technique relies on the principal angle decreasing at each step, which does not necessarily hold with the larger level of noise.
Nevertheless we expect the principal angle to decrease fairly fast on average, so that X_L will contain a subspace very close to U. We are actually unaware of this sort of result even in the noiseless setting.

Conjecture 1.5. Let X_0 be a random p-dimensional basis for p > k. Suppose at every step we have

100∥G_ℓ∥ ⩽ ε(σ_k − σ_{p+1})  and  100∥U^⊤ G_ℓ∥ ⩽ (√p − √(k−1))/√d.

Then with high probability, after L = O((σ_k/(σ_k − σ_{p+1})) log(d/ε)) iterations we have ∥(I − X_L X_L^⊤)U∥ ⩽ ε.

The second way of dealing with a small eigengap would be to relax our goal. Corollary 1.1 is quite stringent in that it requires X_L to approximate the top k singular vectors U, which gets harder when the eigengap approaches zero and the k-th through (p+1)-st singular vectors are nearly indistinguishable. A relaxed goal would be for X_L to spectrally approximate A, that is,

∥(I − X_L X_L^⊤)A∥ ⩽ σ_{k+1} + ε.   (1)

This weaker goal is known to be achievable in the noiseless setting without any eigengap at all. In particular, [?] shows that (1) happens after L = O((σ_{k+1}/ε) log n) steps in the noiseless setting. A plausible extension to the noisy setting would be:

Conjecture 1.6. Let X_0 be a random 2k-dimensional basis. Suppose at every step we have

∥G_ℓ∥ ⩽ ε  and  ∥U^⊤ G_ℓ∥ ⩽ ε√(k/d).

Then with high probability, after L = O((σ_{k+1}/ε) log d) iterations we have ∥(I − X_L X_L^⊤)A∥ ⩽ σ_{k+1} + O(ε).

1.5 Organization

All proofs can be found in the supplementary material. In the remaining space, we limit ourselves to a more detailed discussion of our convergence analysis and the application to streaming PCA. The entire section on privacy is in the supplementary material, in Section E.

2 Convergence of the noisy power method

Figure 1 presents our basic algorithm that we analyze in this section. An important tool in our analysis is principal angles, which are useful in analyzing the convergence behavior of numerical eigenvalue methods.
Roughly speaking, we will show that the tangent of the k-th principal angle between X and the top k eigenvectors of A decreases as σ_{k+1}/σ_k in each iteration of the noisy power method.

Definition 2.1 (Principal angles). Let X and Y be subspaces of R^d of dimension at least k. The principal angles 0 ⩽ θ_1 ⩽ · · · ⩽ θ_k between X and Y and the associated principal vectors x_1, . . . , x_k and y_1, . . . , y_k are defined recursively via

θ_i(X, Y) = min{ arccos( ⟨x, y⟩ / (∥x∥_2 ∥y∥_2) ) : x ∈ X, y ∈ Y, x ⊥ x_j, y ⊥ y_j for all j < i },

and x_i, y_i are the x and y that attain this value. For matrices X and Y, we use θ_k(X, Y) to denote the k-th principal angle between their ranges.

2.1 Convergence argument

Fix parameters 1 ⩽ k ⩽ p ⩽ d. In this section we consider a symmetric d × d matrix A with singular values σ_1 ⩾ σ_2 ⩾ · · · ⩾ σ_d. We let U ∈ R^{d×k} contain the first k eigenvectors of A. Our main lemma shows that tan θ_k(U, X) decreases multiplicatively in each step.

Lemma 2.2. Let U contain the largest k eigenvectors of a symmetric matrix A ∈ R^{d×d}, and let X ∈ R^{d×p} for p ⩾ k. Let G ∈ R^{d×p} satisfy

4∥U^⊤ G∥ ⩽ (σ_k − σ_{k+1}) cos θ_k(U, X)  and  4∥G∥ ⩽ (σ_k − σ_{k+1})ε

for some ε < 1. Then

tan θ_k(U, AX + G) ⩽ max( ε, max(ε, σ_{k+1}/σ_k)^{1/4} · tan θ_k(U, X) ).

We can inductively apply the previous lemma to get the following general convergence result.

Theorem 2.3. Let U represent the top k eigenvectors of the matrix A and let γ = 1 − σ_{k+1}/σ_k. Suppose that the initial subspace X_0 and the noise G_ℓ are such that

5∥U^⊤ G_ℓ∥ ⩽ (σ_k − σ_{k+1}) cos θ_k(U, X_0)  and  5∥G_ℓ∥ ⩽ ε(σ_k − σ_{k+1})

at every stage ℓ, for some ε < 1/2. Then there exists an L ≲ (1/γ) log(tan θ_k(U, X_0)/ε) such that for all ℓ ⩾ L we have tan θ_k(U, X_ℓ) ⩽ ε.

2.2 Random initialization

The next lemma essentially follows from bounds on the smallest singular value of gaussian random matrices [RV09].

Lemma 2.4. For an arbitrary orthonormal U and a random subspace X, we have

tan θ_k(U, X) ⩽ τ√d / (√p − √(k−1))

with all but τ^{−Ω(p+1−k)} + e^{−Ω(d)} probability.

With this lemma we can prove the corollary that we stated in the introduction.
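The principal angles of Definition 2.1 can be computed from the singular values of Q_X^⊤ Q_Y (their cosines), which makes the contraction behind Lemma 2.2 easy to check numerically. The following sketch, with a made-up spectrum, verifies the noiseless case for p = k, where one exact power step shrinks tan θ_k by at least a factor of σ_{k+1}/σ_k.

```python
import numpy as np

def tan_theta_k(U, X):
    """tan of the largest principal angle theta_k (Definition 2.1)
    between range(U) and range(X), with k = dim range(U), computed
    from the singular values of Q_U^T Q_X (the cosines)."""
    Qu, _ = np.linalg.qr(U)
    Qx, _ = np.linalg.qr(X)
    c = np.linalg.svd(Qu.T @ Qx, compute_uv=False)[-1]  # cos(theta_k)
    c = min(max(c, 1e-12), 1.0)                         # guard against roundoff
    return np.sqrt(1.0 - c * c) / c

# Noiseless sanity check: one power step contracts tan(theta_k)
# by the ratio sigma_{k+1}/sigma_k of the (made-up) spectrum.
rng = np.random.default_rng(0)
d, k = 40, 3
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
sig = np.array([1.0, 0.9, 0.8, 0.4] + [0.1] * (d - 4))
A = Q @ np.diag(sig) @ Q.T
U = Q[:, :k]                     # top-k eigenvectors
X = rng.standard_normal((d, k))  # random start, p = k
t0 = tan_theta_k(U, X)
t1 = tan_theta_k(U, A @ X)       # after one noiseless power step
assert t1 <= (sig[k] / sig[k - 1]) * t0 * (1 + 1e-8)
```

With noise G added to A @ X, the same quantity tracks the weaker guarantee of Lemma 2.2 instead of the exact noiseless contraction.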
Proof of Corollary 1.1. By Lemma 2.4, with the desired probability we have tan θ_k(U, X_0) ⩽ τ√d/(√p − √(k−1)). Hence cos θ_k(U, X_0) ⩾ 1/(1 + tan θ_k(U, X_0)) ⩾ (√p − √(k−1))/(2τ√d). Rescale τ and apply Theorem 2.3 to get that tan θ_k(U, X_L) ⩽ ε. Then ∥(I − X_L X_L^⊤)U∥ = sin θ_k(U, X_L) ⩽ tan θ_k(U, X_L) ⩽ ε. ■

3 Memory-efficient streaming PCA

In the streaming PCA setting we receive a stream of samples z_1, z_2, · · · ∈ R^d. Each sample is drawn i.i.d. from an unknown distribution D over R^d. Our goal is to compute the dominant k eigenvectors of the covariance matrix A = E_{z∼D} zz^⊤. The challenge is to do this with small space, so we cannot store the d^2 entries of the sample covariance matrix. We would like to use O(dk) space, which is necessary even to store the output. The streaming power method (Figure 2, introduced by [MCJ13]) is a natural algorithm that performs streaming PCA with O(dk) space. The question that arises is how many samples it requires to achieve a given level of accuracy, for various distributions D. Using our general analysis of the noisy power method, we show that the streaming power method requires fewer samples and applies to more distributions than was previously known. We analyze a broad class of distributions:

Definition 3.1. A distribution D over R^d is (B, p)-round if for every p-dimensional projection P and all t ⩾ 1 we have

Pr_{z∼D}{∥z∥ > t} ⩽ exp(−t)  and  Pr_{z∼D}{∥Pz∥ > t · √(Bp/d)} ⩽ exp(−t).

The first condition just corresponds to a normalization of the samples drawn from D. Assuming the first condition holds, the second condition always holds with B = d/p. For this reason our analysis in principle applies to any distribution, but the sample complexity will depend quadratically on B. Let us illustrate this definition through the example of the spiked covariance model studied by [MCJ13]. The spiked covariance model is defined by an orthonormal basis U ∈ R^{d×k} and a diagonal matrix Λ ∈ R^{k×k} with diagonal entries λ_1 ⩾ λ_2 ⩾ · · · ⩾ λ_k > 0.
The distribution D(U, Λ) is defined as the normal distribution N(0, (UΛ²U^⊤ + σ²Id_{d×d})/D) where D = Θ(dσ² + Σ_i λ_i²) is a normalization factor chosen so that the distribution satisfies the norm bound. Note that the i-th eigenvalue of the covariance matrix is σ_i = (λ_i² + σ²)/D for 1 ≤ i ≤ k and σ_i = σ²/D for i > k. We show in Lemma D.2 that the spiked covariance model D(U, Λ) is indeed (B, p)-round for $B = O\big(\frac{\lambda_1^2+\sigma^2}{\mathrm{tr}(\Lambda)/d+\sigma^2}\big)$, which is constant for σ ≳ λ_1. We have the following main theorem.

Theorem 3.2. Let D be a (B, p)-round distribution over R^d with covariance matrix A whose eigenvalues are σ_1 ≥ σ_2 ≥ ··· ≥ σ_d ≥ 0. Let U ∈ R^{d×k} be an orthonormal basis for the eigenvectors corresponding to the first k eigenvalues of A. Then, the streaming power method SPM returns an orthonormal basis X ∈ R^{d×p} such that tan θ(U, X) ≤ ε with probability 9/10 provided that SPM receives n samples from D for some n satisfying
$$n \le \tilde{O}\Big(\frac{B^2\sigma_k k \log^2 d}{\varepsilon^2(\sigma_k - \sigma_{k+1})^3 d}\Big) \quad\text{if } p = k + \Theta(k).$$
More generally, for all p ≥ k one can get the slightly stronger result
$$n \le \tilde{O}\Big(\frac{B p \sigma_k \max\{1/\varepsilon^2,\; Bp/(\sqrt{p} - \sqrt{k-1})^2\}\log^2 d}{(\sigma_k - \sigma_{k+1})^3 d}\Big).$$

Instantiating with the spiked covariance model gives the following:

Corollary 3.3. In the spiked covariance model D(U, Λ) the conclusion of Theorem 3.2 holds for p = 2k with
$$n = \tilde{O}\Big(\frac{(\lambda_1^2 + \sigma^2)^2(\lambda_k^2 + \sigma^2)}{\varepsilon^2\lambda_k^6}\, dk\Big).$$
When λ_1 = O(1) and λ_k = Ω(1) this becomes $n = \tilde{O}\big(\frac{\sigma^6+1}{\varepsilon^2}\cdot dk\big)$.

We can apply Theorem 3.2 to all distributions that have exponentially concentrated norm by setting B = d/p. This gives the following result.

Corollary 3.4. Let D be any distribution scaled such that Pr_{z∼D}[∥z∥ > t] ≤ exp(−t) for all t ≥ 1. Then the conclusion of Theorem 3.2 holds for p = 2k with
$$n = \tilde{O}\Big(\frac{\sigma_k}{\varepsilon^2 k(\sigma_k - \sigma_{k+1})^3}\cdot d\Big).$$
If the eigenvalues follow a power law, σ_j ≈ j^{−c} for a constant c > 1/2, this gives an n = Õ(k^{2c+2}d/ε²) bound on the sample complexity.

References

[ACLS12] Raman Arora, Andrew Cotter, Karen Livescu, and Nathan Srebro. Stochastic optimization for PCA and PLS.
In Communication, Control, and Computing (Allerton), 2012 50th Annual Allerton Conference on, pages 861–868. IEEE, 2012. [BDF13] Akshay Balsubramani, Sanjoy Dasgupta, and Yoav Freund. The fast convergence of incremental PCA. In Proc. 27th Neural Information Processing Systems (NIPS), pages 3174–3182, 2013. [BDMN05] Avrim Blum, Cynthia Dwork, Frank McSherry, and Kobbi Nissim. Practical privacy: the SuLQ framework. In Proc. 24th PODS, pages 128–138. ACM, 2005. [CLMW11] Emmanuel J. Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? J. ACM, 58(3):11, 2011. [CR09] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computional Mathematics, 9:717–772, December 2009. [CSS12] Kamalika Chaudhuri, Anand Sarwate, and Kaushik Sinha. Near-optimal differentially private principal components. In Proc. 26th Neural Information Processing Systems (NIPS), 2012. [CT10] Emmanuel J. Candès and Terence Tao. The power of convex relaxation: near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010. [DK70] Chandler Davis and W. M. Kahan. The rotation of eigenvectors by a perturbation. iii. SIAM J. Numer. Anal., 7:1–46, 1970. [Har14] Moritz Hardt. Understanding alternating minimization for matrix completion. In Proc. 55th Foundations of Computer Science (FOCS). IEEE, 2014. [Hig02] Nicholas J. Higham. Accuracy and Stability of Numerical Algorithms. Society for Industrial and Applied Mathematics, 2002. [HR12] Moritz Hardt and Aaron Roth. Beating randomized response on incoherent matrices. In Proc. 44th Symposium on Theory of Computing (STOC), pages 1255–1268. ACM, 2012. [HR13] Moritz Hardt and Aaron Roth. Beyond worst-case analysis in private singular vector computation. In Proc. 45th Symposium on Theory of Computing (STOC). ACM, 2013. [JNS13] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using alternating minimization. In Proc. 
45th Symposium on Theory of Computing (STOC), pages 665–674. ACM, 2013. [KT13] Michael Kapralov and Kunal Talwar. On differentially private low rank approximation. In Proc. 24rd Symposium on Discrete Algorithms (SODA). ACM-SIAM, 2013. [MCJ13] Ioannis Mitliagkas, Constantine Caramanis, and Prateek Jain. Memory limited, streaming PCA. In Proc. 27th Neural Information Processing Systems (NIPS), pages 2886– 2894, 2013. [MM09] Frank McSherry and Ilya Mironov. Differentially private recommender systems: building privacy into the net. In Proc. 15th KDD, pages 627–636. ACM, 2009. [RV09] Mark Rudelson and Roman Vershynin. Smallest singular value of a random rectangular matrix. Communications on Pure and Applied Mathematics, 62(12):1707–1739, 2009. [SS07] Valeria Simoncini and Daniel B. Szyld. Recent computational developments in krylov subspace methods for linear systems. Numerical Linear Algebra With Applications, 14:1–59, 2007. 9
|
2014
|
174
|
5,263
|
Robust Tensor Decomposition with Gross Corruption Quanquan Gu∗ Department of Operations Research and Financial Engineering Princeton University Princeton, NJ 08544 qgu@princeton.edu Huan Gui∗ Jiawei Han Department of Computer Science University of Illinois at Urbana-Champaign Urbana, IL 61801 {huangui2,hanj}@illinois.edu Abstract In this paper, we study the statistical performance of robust tensor decomposition with gross corruption. The observations are noisy realizations of the superposition of a low-rank tensor W∗ and an entrywise sparse corruption tensor V∗. Unlike conventional noise with bounded variance in previous convex tensor decomposition analysis, the magnitude of the gross corruption can be arbitrarily large. We show that under certain conditions, the true low-rank tensor as well as the sparse corruption tensor can be recovered simultaneously. Our theory yields nonasymptotic Frobenius-norm estimation error bounds for each tensor separately. We show through numerical experiments that our theory can precisely predict the scaling behavior in practice. 1 Introduction Tensor data analysis has witnessed increasing applications in machine learning, data mining and computer vision. For example, an ensemble of face images can be modeled as a tensor, whose modes correspond to pixels, subjects, illumination and viewpoint [23]. Traditional tensor decomposition methods such as Tucker decomposition and CANDECOMP/PARAFAC (CP) decomposition [14, 13] aim to factorize an input tensor into a number of low-rank factors. However, they are prone to local optima because they are solving essentially non-convex optimization problems. In order to address this problem, [15] [20] extended the trace norm of matrices [19] to tensors, and generalized convex matrix completion [8] [7] and matrix decomposition [6] to convex tensor completion/decomposition.
For example, the goal of tensor decomposition is to accurately estimate a low-rank tensor W ∈ R^{n1×...×nK} from the noisy observation tensor Y ∈ R^{n1×...×nK} that is contaminated by dense noise, i.e., Y = W∗ + E, where W∗ ∈ R^{n1×...×nK} is a low-rank tensor and E ∈ R^{n1×...×nK} is a noise tensor whose entries are i.i.d. Gaussian noise with zero mean and bounded variance σ², i.e., E_{i1,...,iK} ∼ N(0, σ²). [22] [21] analyzed the statistical performance of convex tensor decomposition under different extensions of the trace norm. They showed that, under certain conditions, the estimation error scales with the rank of the true tensor W∗. Furthermore, they demonstrated that given a noisy tensor, the true low-rank tensor can be recovered under the restricted strong convexity assumption [18]. However, all these algorithms [15] [20] and theoretical results [22] [21] rely on the assumption that the observation noise has a bounded variance σ². Without this assumption, we are not able to identify the rank of W∗, and therefore the estimated low-rank tensor Ŵ could be very far from the true tensor W∗. On the other hand, in many practical applications such as face recognition and image/video denoising, a portion of the observation tensor Y might be contaminated by gross error due to illumination, occlusion or pepper/salt noise. This scenario is not covered by the finite-variance noise assumption, so new mathematical models are needed to address this problem. This motivates us to study convex tensor decomposition with gross corruption. (∗ Equal contribution.) It is clear that if all the entries of a tensor are corrupted by large error, there is no hope to recover the underlying low-rank tensor. To overcome this problem, one common assumption is that the gross corruption is sparse.
Under this assumption, together with the previous low-rank assumption, we formalize the noisy linear observation model as follows: Y = W∗ + V∗ + E, (1) where W∗ ∈ R^{n1×...×nK} is a low-rank tensor, V∗ ∈ R^{n1×...×nK} is a sparse corruption tensor, where the locations of nonzero entries are unknown and the magnitudes of the nonzero entries can be arbitrarily large, and E ∈ R^{n1×...×nK} is a noise tensor whose entries are i.i.d. Gaussian noise with zero mean and bounded variance σ², and thus dense. Our goal is to recover the low-rank tensor W∗, as well as the sparse corruption tensor V∗. Note that in some applications, the corruption tensor is of independent interest and needs to be recovered. Given the observation model in (1), and the low-rank and sparse assumptions on W∗ and V∗ respectively, we propose the following convex minimization to estimate the unknown low-rank tensor W∗ and the sparse corruption tensor V∗ simultaneously:
$$\arg\min_{W,V}\; |||Y - W - V|||_F^2 + \lambda_M |||W|||_{S_1} + \mu_M |||V|||_1, \quad (2)$$
where |||·|||_{S_1} is the tensor Schatten-1 norm [22], |||·|||_1 is the entry-wise ℓ1 norm of tensors, and λ_M and µ_M are positive regularization parameters. We call this optimization Robust Tensor Decomposition, which can be seen as a generalization of convex tensor decomposition in [15] [20] [22]. The regularization associated with V encourages sparsity of the corruption tensor, where the parameter µ_M controls the sparsity level. In this paper, we focus on the following questions: under what conditions on the size of the tensor, the rank of the tensor, and the fraction (sparsity level) of the corruption (i) is (2) able to recover W∗ and V∗ with small estimation error? (ii) is (2) able to recover the exact rank of W∗ and the support of V∗? We will present nonasymptotic error bounds to answer these questions. Experiments on synthetic datasets validate our theoretical results. The rest of this paper is arranged as follows. Related work is discussed in Section 2.
Section 3 introduces the background and notation. Section 4 presents the main results. Section 5 provides an ADMM algorithm to solve the problem, followed by the numerical experiments in Section 6. Section 7 concludes this work with remarks. 2 Related Work The problem of recovering data under gross error has received much attention recently in matrix decomposition. A large body of work has been proposed and analyzed statistically. For example, [9] considered the problem of recovering an unknown low-rank and an unknown sparse matrix, given the sum of the two matrices. [5] proposed a similar problem, namely robust principal component analysis (RPCA), which studies the problem of recovering the low-rank and sparse matrices by solving a convex program. [10] studied multi-task regression, which decomposes the coefficient matrix into two matrices, and imposes different group sparse regularization on the two matrices. [25] considered a more general case, where the parameter matrix could be the superposition of more than two matrices with different structural constraints. Our paper extends [5] in two respects: we extend the problem from matrices to high-order tensors, and we consider the additional noise setting. We note that [16] extended RPCA to tensors, aiming to recover the low-rank and sparse tensors by solving a constrained convex program. However, our formulation departs from [16] in that we consider not only the sparse corruption, but also the dense noise. We also note that low-rank noisy matrix completion [17] and robust matrix decomposition [1] [12] have been studied in the high-dimensional setting as well. Our model can be seen as the high-order extension of robust matrix decomposition. This extension is nontrivial, because the treatment of the tensor trace norm (Schatten-1 norm) is more complicated.
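Before turning to the notation, here is a concrete instance of observation model (1) as a numerical sketch (my own construction, not code from the paper; sizes, ranks, and noise levels are arbitrary): a low-rank W∗ built via a random Tucker factorization, a sparse V∗ with arbitrarily large entries, and dense Gaussian noise E.

```python
import numpy as np

def mode_product(core, factors):
    # Multiply a core tensor by a factor matrix along each mode (Tucker construction).
    T = core
    for k, Uk in enumerate(factors):
        T = np.moveaxis(np.tensordot(Uk, np.moveaxis(T, k, 0), axes=1), 0, k)
    return T

rng = np.random.default_rng(0)
shape, ranks = (10, 10, 5), (2, 2, 2)
core = rng.standard_normal(ranks)
factors = [np.linalg.qr(rng.standard_normal((n, r)))[0] for n, r in zip(shape, ranks)]
W_star = mode_product(core, factors)            # low-rank tensor W*
V_star = np.zeros(shape)                        # entrywise sparse corruption V*
support = rng.choice(V_star.size, size=10, replace=False)
V_star.flat[support] = rng.uniform(-10.0, 10.0, size=10)
E = 0.01 * rng.standard_normal(shape)           # dense i.i.d. Gaussian noise
Y = W_star + V_star + E                         # observation model (1)
```

Each mode-k unfolding of W_star has rank r_k = 2, while V_star has only 10 nonzero entries whose magnitudes are unrelated to the noise variance.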
More importantly, for the robust matrix decomposition problem considered in [1], only the sum of the error bounds of the two matrices (the low-rank matrix and the sparse corruption matrix) can be obtained under the assumption of restricted strong convexity. In contrast, under a different condition, our analysis provides an error bound for each tensor component (the low-rank tensor and the sparse corruption tensor) separately, making our results more appealing in practice and of independent theoretical interest. Since the problem in [1] is a special case of our problem, our technical tool can be directly applied to their problem and yields new error bounds on the low-rank matrix as well as the sparse corruption matrix separately. 3 Notation and Background Before proceeding, we define our notation and state assumptions that will appear in various parts of the analysis. For more details about tensor algebra, please refer to [14]. Scalars are denoted by lower case letters (a, b, . . .), vectors by bold lower case letters (a, b, . . .), matrices by bold upper case letters (A, B, . . .), and high-order tensors by calligraphic upper case letters (A, B, . . .). A tensor is a higher order generalization of a vector (first order tensor) and a matrix (second order tensor). From a multi-linear algebra view, a tensor is a multi-linear mapping over a set of vector spaces. The order of a tensor A ∈ R^{n1×n2×...×nK} is K, where n_k is the dimensionality of the k-th mode. Elements of A are denoted as A_{i1...ik...iK}, 1 ≤ i_k ≤ n_k. We denote the number of elements in A by N = Π_{k=1}^K n_k. The mode-k vectors of a K-order tensor A are the n_k-dimensional vectors obtained from A by varying index i_k while keeping the other indices fixed. The mode-k vectors are the column vectors of the mode-k flattening matrix A_{(k)} ∈ R^{n_k×(n_1...n_{k−1}n_{k+1}...n_K)} that results from mode-k flattening the tensor A. For example, matrix column vectors are referred to as mode-1 vectors and matrix row vectors are referred to as mode-2 vectors.
The scalar product of two tensors A, B ∈ Rn1...n2...nK, is defined as ⟨A, B⟩ = P i1 . . . P iK Ai1...iKBi1...iK = vec(A)vec(B), where vec(·) is a vectorization. The Frobenius norm of a tensor A is |||A|||F = p ⟨A, A⟩. There are multiple ways to define tensor rank. In this paper, following [22], we define the rank of a tensor based on the mode-k rank of a tensor. More specifically, the mode-k rank of a tensor X, denoted by rankk(X), is the rank of the mode-k unfolding X(k) (note that X(k) is a matrix, so its rank is well-defined). Based on mode-k rank, we define the rank of tensor X as r(X) = (r1, . . . , rk) if the mode-k rank is rk for k = 1, . . . , K. Note that the mode-k rank can be computed in polynomial time, because it boils down to computing a matrix rank, whereas computing tensor rank [14] is NP complete. Similarly, we extend the trace norm (a.k.a. nuclear norm) of matrices [19] to tensors. The overlapped Schatten-1 norm is defined as |||X|||S1 = 1 K PK k=1 ∥X(k)∥S1, where X(k) is the mode-k unfolding of X, and ∥· ∥S1 is the Schatten-1 norm for a matrix, ∥X∥S1 = Pr j=1 σj(X), where σj(X) is the j-th largest singular value of X. The dual norm of the Schatten-1 norm is Schatten-∞norm (a.k.a., spectral norm) as ∥X∥S∞= maxj=1,...,r σj(X). By H¨older’s inequality, we have |⟨W, X⟩| ≤∥W∥S1∥X∥S∞. It is easy to prove a similar result for the overlapped Schatten-1 norm and its dual norm. We have the following H¨older-like inequality [22]: |⟨W, X⟩| ≤|||W|||S1 |||X|||S∗ 1 ≤|||W|||S1 |||X|||mean , (3) where |||X|||mean := 1 K PK k=1 ∥X(k)∥S∞. Moreover, we define ℓ1-norm and ℓ∞-norm for tensors that |||X|||1 = Pn1 i1=1 . . . PnK iK=1 |Xi1,...,iK|, |||X|||∞= max1≤i1≤n1 . . . max1≤iK≤nK |Xi1,...,iK|. By H¨older’s inequality, we have |⟨W, X⟩| ≤ |||W|||1 |||X|||∞, and the following inequality relates the overlapped Schatten-1 norm with the Frobenius norm, |||X|||S1 ≤ K X k=1 √rk |||X|||F . (4) Let W∗∈Rn1×...×nK be the low-rank tensor that we wish to recover. 
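The mode-k unfolding and the overlapped Schatten-1 norm defined above can be written in a few lines; the sketch below (illustrative, not from the paper) also spot-checks inequality (4) on a random tensor.

```python
import numpy as np

def unfold(X, k):
    # mode-k flattening: mode-k fibers become the columns of X_(k)
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def overlapped_schatten1(X):
    # |||X|||_{S1} = (1/K) * sum_k ||X_(k)||_{S1}
    K = X.ndim
    return sum(np.linalg.svd(unfold(X, k), compute_uv=False).sum()
               for k in range(K)) / K

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 6, 7))
ranks = [np.linalg.matrix_rank(unfold(X, k)) for k in range(X.ndim)]
lhs = overlapped_schatten1(X)
rhs = sum(np.sqrt(r) for r in ranks) * np.linalg.norm(X)   # inequality (4)
```

A full-rank random tensor has mode-k ranks (5, 6, 7) here, and the bound (4) holds with room to spare since each ∥X_(k)∥_{S1} ≤ √r_k ∥X_(k)∥_F already.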
We assume that W∗ is of rank (r_1, . . . , r_K). Thus, for each k, we have W∗_{(k)} = U_k S_k V_k^⊤, where U_k ∈ R^{n_k×r_k} and V_k ∈ R^{¯N\k×r_k} are orthogonal matrices, which consist of the left and right singular vectors of W∗_{(k)}, and S_k ∈ R^{r_k×r_k} is a diagonal matrix whose diagonal elements are singular values. Let ∆ ∈ R^{n1×...×nK} be an arbitrary tensor; we define the mode-k orthogonal complement ∆''_k of its mode-k unfolding ∆_{(k)} ∈ R^{n_k× ¯N\k} with respect to the true low-rank tensor W∗ as follows:
$$\Delta''_k = (I_{n_k} - U_kU_k^\top)\,\Delta_{(k)}\,(I_{\bar N_{\backslash k}} - V_kV_k^\top). \quad (5)$$
In addition, ∆'_k = ∆_{(k)} − ∆''_k is the component whose row/column space overlaps with that of the unfolding W∗_{(k)} of the true tensor. Note that the decomposition ∆_{(k)} = ∆'_k + ∆''_k is defined for each mode. In [18], the concept of decomposability and a large class of decomposable norms are discussed at length. Of particular relevance to us is the decomposability of the Schatten-1 norm and the ℓ1 norm. We have the following equality, i.e., mode-k decomposability of the Schatten-1 norm: ∥W∗_{(k)} + ∆''_k∥_{S1} = ∥W∗_{(k)}∥_{S1} + ∥∆''_k∥_{S1}, k = 1, . . . , K. Note that the decomposability is defined on each mode. It is also easy to check the decomposability of the ℓ1-norm. Let V∗ ∈ R^{n1×...×nK} be the gross corruption tensor that we wish to recover. We assume that the gross corruption is sparse, in the sense that the cardinality s = |supp(V∗)| of its support S = supp(V∗) = {(i_1, i_2, . . . , i_K) ∈ [n_1] × . . . × [n_K] : V∗_{i1,...,iK} ≠ 0} is small. This assumption leads to the following inequality between the ℓ1 norm and the Frobenius norm: |||V∗|||_1 ≤ √s |||V∗|||_F. Moreover, we have |||V∗|||_1 = |||V∗_S|||_1. For any D ∈ R^{n1×...×nK}, we have |||D|||_1 = |||D_S|||_1 + |||D_{S^c}|||_1. 4 Main Results To get a deep theoretical insight into the recovery property of robust tensor decomposition, we will now present a set of estimation error bounds.
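The mode-k decomposability stated above can be verified numerically. A matrix-level sketch (my own, with arbitrary sizes) for a single unfolding: the complement ∆''_k of (5) has column space orthogonal to U_k and row space orthogonal to V_k, so the Schatten-1 norm splits additively.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 8, 12, 2
U, _ = np.linalg.qr(rng.standard_normal((n, r)))     # left singular vectors of W*_(k)
V, _ = np.linalg.qr(rng.standard_normal((m, r)))     # right singular vectors of W*_(k)
W = U @ np.diag([3.0, 2.0]) @ V.T                    # rank-r unfolding W*_(k)
Delta = rng.standard_normal((n, m))                  # arbitrary perturbation
# equation (5): project out the row and column spaces of W*_(k)
Dpp = (np.eye(n) - U @ U.T) @ Delta @ (np.eye(m) - V @ V.T)
nuc = lambda M: np.linalg.svd(M, compute_uv=False).sum()
gap = abs(nuc(W + Dpp) - (nuc(W) + nuc(Dpp)))        # decomposability: gap ~ 0
```

Because W and ∆'' live in mutually orthogonal row and column spaces, the singular values of W + ∆'' are the union of the two spectra, which is exactly what makes the nuclear norm additive here.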
Unlike the analysis in [1], where only the summation of the estimation errors on the low-rank matrix and gross corruption matrix are analyzed, we aim at obtaining the estimation error bounds on each tensor (the low-rank tensor and corrupted tensor) separately. All the proofs can be found in the longer version of this paper. Instead of considering the observation model in 1, we consider the following more general observation model yi = ⟨W∗, Xi⟩+ ⟨V∗, Xi⟩+ ϵi, i = 1, . . . , M, (6) where Xi can be seen as an observation operator, and ϵi’s are i.i.d. zero mean Gaussian noise with variance σ2. Our goal is to estimate an unknown rank (r1, . . . , rk) of tensor W∗∈Rn1×...×nK, as well as the unknown support of tensor V∗, from observations yi, i = 1, . . . , M. We propose the following convex minimization to estimate the unknown low-rank tensor W∗and the sparse corruption tensor V∗simultaneously, with composite regularizers on W and V as follows: (c W, bV) = arg min W,V 1 2M ∥y −X(W + V)∥2 2 + λM |||W|||S1 + µM |||V|||1 , (7) where y = (y1, . . . , yM)⊤is the collection of observations, X(W) is the linear observation model that X(W) = [⟨W, X1⟩, . . . , ⟨W, XM⟩]⊤. Note that (2) is a special case of (7), where the linear operator the identity tensor, we have yi as observation of each element in the summation of tensors W∗+ V∗. We also define y∗= (y∗ 1, . . . , y∗ M)⊤, where y∗ i = ⟨W∗+ V∗, Xi⟩, is the true evaluation. Due to the noise of observation model, we have y = y∗+ ϵ. In addition, we define the adjoint operator of X as X∗: RM →Rn1×...×nK that X∗(ϵ) = PM i=1 ϵiXi. 4.1 Deterministic Bounds This section is devoted to obtain the deterministic bound of the residual low-rank tensor ∆= c W−W∗ and residual corruption tensor D = bV −V∗separately, which makes our analysis unique. We begin with a key technical lemma on residual tensors ∆= c W −W∗and D = bV −V∗, obtained from the convex problem in (7). Lemma 1. 
Let c W and bV be the solution of minimization problem (7) with λM ≥2 |||X∗(ϵ)|||mean/M, µM ≥2 |||X∗(ϵ)|||∞/M, we have 4 1. rank(∆′ k) ≤2rk. 2. There exist β1 ≥3 and β2 ≥3, such that PK k=1 ∥∆′′ k∥S1 ≤β1 PK k=1 ∥∆′ k∥S1 and |||DSc|||1 ≤β2 |||DS|||1. The lemma can be obtained by utilizing the optimality of c W and bV, as well as the decomposibility of Schatten-1 norm and ℓ1-norm of tensors. Also, we obtain the key property of the optimal solution of (7), presented in the following theorem. Theorem 1. Let c W and bV be the solution of minimization problem (7) with λM ≥ 2 |||X∗(ϵ)|||mean/M, µM ≥2 |||X∗(ϵ)|||∞/M, we have 1 2M ∥X(∆+ D)∥2 2 ≤3λM 2K K X k=1 ∥∆′ k∥S1 + 3µM 2 |||DS|||1 . (8) Theorem 1 provides a deterministic prediction error bound for model (7). This is a very general result, and can be applied to any linear operator X, including the robust tensor decomposition case that we are particularly interested in this paper. It also covers, for example, tensor regression, tensor compressive sensing, to mention a few. Furthermore, we impose an assumption on the linear operator and the residual low-rank tensor and residue sparse corruption tensor, which generalized the restricted eigenvalue assumption [2] [10]. Assumption 1. Defining Ω = {(∆, D)| PK k=1 ∥∆′′ k∥S1 ≤ β1 PK k=1 ∥∆′ k∥S1, |||DSc|||1 ≤ β2 |||DS|||1}, we assume there exist positive scalars κ1, κ2 that κ1 = min ∆,D∈Ω ∥X(∆+ D)∥2 √ M |||∆|||F > 0, κ2 = min ∆,D∈Ω ∥X(∆+ D)∥2 √ M |||D|||F > 0. Note that Assumption 1 is also related to restricted strong convexity assumption, which is proposed in [18] to analyze the statistical properties of general M-estimators in the high dimensional setting. Combing the results in Theorem 1 and Assumption 1, we have the following theorem, which summarizes our main result. Theorem 2. Let c W, bV be an optimal solution of (7), and take the regularization parameters λM ≥ 2 |||X∗(ϵ)|||mean/M, µM ≥2 |||X∗(ϵ)|||∞/M. 
Then the following results hold: c W −W∗ F ≤3 κ1 1 K K X k=1 λM √2rk κ1 + µM √s κ2 ! , (9) bV −V∗ F ≤3 κ2 1 K K X k=1 λM √2rk κ1 + µM √s κ2 ! . (10) Theorem 2 provides us with the error bounds of each tensor separately. Specifically, these bounds not only measure how well our decomposition model can approximate the observation model defined in (6), but also measure how well the decomposition of the true low-rank tensor and gross corruption tensor is. When s = 0, our theoretical results reduce to that proposed in [22], which is a special case of our problem, i.e., noisy low-rank tensor decomposition without corruption. On the other hand, the results obtained in Theorem 2 are very appealing both practically and theoretically. From the perspective of applications, this result is quite useful as it helps us to better understand the behavior of each tensor separately. From the theoretical point of view, this result is novel, and is incomparable with previous results [1][17] or simple generalization of previous results. Though Theorem 2 has provided estimation error bounds of c W and bV, it is unclear whether the rank of W∗and the support of V∗can be exactly recovered. We show that under some assumptions about the true tensors, both of them can be exactly recovered. Corollary 1. Under the same conditions of Theorem 2, if the following condition holds: σrk(W∗ (k)) > 6(1 + β1) PK k=1 √2rk κ1MK 1 K K X k=1 λM √2rk κ1 + µM √s κ2 ! , (11) 5 where σrk(W∗ (k)) is the rk-th largest singular value of W∗ (k), then brk = arg max r σr(c W(k)) > 3(1 + β1) PK k=1 √2rk κ1MK 1 K K X k=1 λM √2rk κ1 + µM √s κ2 ! recovers the rank of W∗ (k) for all k. Furthermore, if the following condition holds: min i1,...,iK |V∗ i1,...,iK| > 6(1 + β2)√s κ2M 1 K K X k=1 λM √2rk κ1 + µM √s κ2 ! , (12) then bS = (i1, i2, . . . , iK) : bVi1,...,iK > 3(1 + β2)√s κ2M 1 K K X k=1 λM √2rk κ1 + µM √s κ2 ! recovers the true support of V∗. 
Corollary 1, basically states that, under the assumption that the singular values of the low-rank tensor W∗, and the entry values of corruption tensor V∗are above the noise level (e.g., (11) and (12)), we can recover the rank and the support successfully. 4.2 Noisy Tensor Decomposition Now we are going back to study robust tensor decomposition with corruption in (2), which is a special case of (7), where the linear operator is identity tensor. As the linear operator X is a vectorization such that M = N, and ∥X(∆+ D)∥2 = |||∆+ D|||F . In addition, it is easy to show that Assumption 1 holds with κ1 = κ2 = O(1/ √ N). It remains to bound |||X∗(ϵ)|||mean and |||X∗(ϵ)|||∞, as shown in the following lemma [1] [24]. Lemma 2. Suppose that X : Rn1×···×nK →RN is a vectorization of a tensor. Then we have with probability at least 1 −2 exp(−C(nk + ¯N\k)) −1/N that |||X∗(ϵ)|||mean ≤σ K K X k=1 √nk + q ¯N\k , |||X∗(ϵ)|||∞≤4σ p log N, where C is a universal constant. With Theorem 2 and Lemma 2, we immediately have the following estimation error bounds for robust tensor decomposition. Theorem 3. Suppose that X : Rn1×···×nK →RN is a vectorization of a tensor. Then for the regularization constants λN ≥2σ PK k=1 √nk + q ¯N\k /(NK), µN > 8σ√log N/N, with probability at least 1 −2 exp(−C(nk + ¯N\k)) −1/N, any solution of (2) have the following error bound: c W −W∗ F ≤6 κ1 1 K K X k=1 σ PK k=1 √nk + q ¯N\k √2rk κ1NK + 4σ√s log N κ2N ! , bV −V∗ F ≤6 κ2 1 K K X k=1 σ PK k=1 √nk + q ¯N\k √2rk κ1NK + 4σ√s log N κ2N ! . In the special case that n1 = . . . = nK = n and r1 = . . . = rK = r, we have c W −W∗ F = O σ √ rnK−1 + σ√Ks log n and bV −V∗ F = O σ √ rnK−1 + σ√Ks log n , which matches the error bound of robust matrix decomposition [1] when K = 2. Note that the high probability support and rank recovery guarantee for the special case of tensor decomposition follows immediately from Corollary 1. Due to the space limit, we omit the result here. 
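For the vectorized operator of Section 4.2, X∗(ϵ) is just the noise tensor itself, so the ℓ∞ bound in Lemma 2 is easy to sanity-check by simulation. This is a rough Monte Carlo sketch of mine (one seed, not a proof), with an arbitrary tensor size:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, shape = 1.0, (20, 20, 25)
N = int(np.prod(shape))                      # N = 10,000 entries
eps = sigma * rng.standard_normal(shape)     # noise tensor; X*(eps) = eps here
linf = np.abs(eps).max()                     # |||X*(eps)|||_inf
bound = 4.0 * sigma * np.sqrt(np.log(N))     # Lemma 2 bound, ~12.1 for N = 10^4
```

The maximum of N standard Gaussians concentrates around √(2 log N) ≈ 4.3, comfortably inside the stated bound of 4σ√(log N) ≈ 12.1.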
6 5 Algorithm In this section, we present an algorithm to solve (2). Since (2) is a special case of (7), we consider the more general problem (7). It is easy to show that (7) is equivalent to the following problem with auxiliary variables Ψ, Φ: min W,V,Y,Z 1 2M ∥y −x⊤(w + v)∥2 2 + λM K K X k=1 |||Ψk|||S1 + µM K K X k=1 |||Φk|||1 , subject to Pkw = ψk, Pkv = φk, where x, w, v, ψk, φk are the vectorizations of PM i=1 Xi, W, V, Ψk, Φk respectively, and Pk is the transformation matrix that change the order of rows and columns so that Pkw = ψk. The augmented Lagrangian (AL) function of the above minimization problem with respect to the primal variables (Wt, Vt) is given as follows: Lη(W, V, {Ψk}K k=1, {Φk}K k=1, {αk}K k=1, {βk}K k=1) =1 2∥y −x⊤(w + v)∥2 2 + λMM K K X k=1 |||Ψk|||S1 + µMM K K X k=1 |||Φk|||1 +η X k (α⊤ k (Pkw −ψk) + 1 2∥Pkw −ψk∥2 2) + X k (β⊤ k (Pkv −φk) + 1 2∥Pkv −φk∥2 2) ! , where αt, βt are Lagrangian multiplier vectors, and η > 0 is a penalty parameter. We then apply the algorithm of Alternating Direction Method of Multipliers (ADMM) [3, 20] to solve the above optimization problem. Starting from initial points (w0, v0, {Ψ0 k}K k=1, {Φ0 k}K k=1, {α0 k}K k=1, {β0 k}K k=1), ADMM performs the following updates iteratively: wt+1 = (x⊤y −x⊤xvt) + η K X k=1 P⊤ k (ψt k −αt k) ! / (1 + ηK) , vt+1 = (x⊤y −x⊤xwt+1) + η K X k=1 P⊤ k (φt k −βt k) ! / (1 + ηK) , Ψt+1 k = proxtr λM ηK (Pkwt+1 + αt k), Φt+1 k = proxℓ1 µM ηK (Pkvt+1 + βt k) k = 1, . . . , K, αt+1 k = αt+1 k + (Pkwt+1 −ψt+1 k ) βt+1 k = βt+1 k + (Pkvt+1 −φt+1 k ) k = 1, . . . , K, where proxtr γ (·) is the soft-thresholding operator for trace norm, and proxℓ1 γ (·) is the soft-thresholding operator for ℓ1 norm [4, 11]. The stopping criterion is that all the partial (sub)gradients are (near) zero, under which condition we obtain the saddle point of the augmented Lagrangian function. Since (7) is strictly convex, the saddle point is the global optima for the primal problem. 
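The two soft-thresholding operators used in the ADMM updates are the standard proximal maps of the trace norm and the ℓ1 norm [4, 11]; a compact sketch:

```python
import numpy as np

def prox_l1(X, t):
    # elementwise soft-thresholding: proximal operator of t * ||.||_1
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def prox_trace(M, t):
    # singular value soft-thresholding: proximal operator of t * ||.||_S1
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt
```

prox_l1 shrinks every entry toward zero by t and zeroes out the small ones; prox_trace applies the same shrinkage to the singular values, so it simultaneously denoises and reduces rank.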
6 Experiments In this section, we conduct numerical experiments to confirm our analysis in previous sections. The experiments are conducted under the setting of robust noisy tensor decomposition. We follow the procedure described in [22] for the experimental part. We randomly generate low-rank tensors of dimensions n(1) = (50, 50, 20) (results are shown in Figure 1(a, b, c)) and n(2) = (100, 100, 50) (results are shown in Figure 1(d, e, f)) for various ranks (r_1, r_2, ..., r_K). Given a specific rank, we first generated the "core tensor" with r_1 × . . . × r_K elements from the standard normal distribution, and then multiplied each mode of the core tensor with an orthonormal factor randomly drawn from the Haar measure. For the gross corruption, we randomly generated the sparsity of the corruption tensor s, and then randomly selected s elements in which we put values randomly generated from the uniform distribution. Additive independent Gaussian noise with variance σ² was added to the observations of elements. We use the alternating direction method of multipliers (ADMM) to solve the minimization problem (2).

[Figure 1: Results of robust noisy tensor decomposition with corruption, under different sizes. Panels (a)–(c) plot, for tensors of size n(1): (a) |||∆|||_F/M against N_s, (b) |||∆|||_F/M against N_r, and (c) κ_1 against κ_2. Panels (d)–(f) show the same quantities for tensors of size n(2).]
The whole experiment was repeated 50 times and the averaged results are reported. The results are shown in Figure 1, where N_r = Σ_{k=1}^K √r_k / K and N_s = √s. In Figure 1(a, d), we first fix N_r at different values, and then plot the value of |||Ŵ − W∗|||_F/N against N_s. Similarly, in Figure 1(b, e), we first fix N_s at different values, and then plot |||Ŵ − W∗|||_F/N against N_r. In Figure 1(c, f), we study the values of κ_1 and κ_2 at various settings. We can see that |||Ŵ − W∗|||_F/N scales linearly with both N_s and N_r. Similar scalings of |||V̂ − V∗|||_F/N can be observed; hence we omit them due to space limitations. We can also observe from Figure 1(c, f) that, under various settings, κ_1 ≈ κ_2; this finding is consistent with the fact that |||Ŵ − W∗|||_F/N ≈ |||V̂ − V∗|||_F/N. All these results are consistent with each other, validating our theoretical analysis. 7 Conclusions In this paper, we analyzed the statistical performance of robust noisy tensor decomposition with corruption. Our goal is to recover a pair of tensors, based on observing a noisy contaminated version of their sum. It is based on solving a convex optimization problem with composite regularization by the Schatten-1 norm and the ℓ1 norm defined on tensors. We provided general nonasymptotic estimation error bounds on the underlying low-rank tensor and the sparse corruption tensor. Furthermore, the error bounds we obtained in this paper are new and not directly comparable with previous theoretical analyses. Acknowledgement We would like to thank the anonymous reviewers for their helpful comments. Research was sponsored in part by the Army Research Lab, under Cooperative Agreement No. W911NF-09-2-0053 (NSCTA), the Army Research Office under Cooperative Agreement No. W911NF-13-1-0193, National Science Foundation IIS-1017362, IIS-1320617, and IIS-1354329, HDTRA1-10-1-0120, and MIAS, a DHS-IDS Center for Multimodal Information Access and Synthesis at UIUC. References [1] A. Agarwal, S. Negahban, and M. J. Wainwright.
| 2014 | 175 | 5,264 |
RAAM: The Benefits of Robustness in Approximating Aggregated MDPs in Reinforcement Learning Marek Petrik IBM T. J. Watson Research Center Yorktown Heights, NY 10598 mpetrik@us.ibm.com Dharmashankar Subramanian IBM T. J. Watson Research Center Yorktown Heights, NY 10598 dharmash@us.ibm.com Abstract We describe how to use robust Markov decision processes for value function approximation with state aggregation. The robustness serves to reduce the sensitivity to the approximation error of sub-optimal policies in comparison to classical methods such as fitted value iteration. This results in reducing the bounds on the γ-discounted infinite horizon performance loss by a factor of 1/(1 −γ) while preserving polynomial-time computational complexity. Our experimental results show that using the robust representation can significantly improve the solution quality with minimal additional computational cost. 1 Introduction State aggregation is one of the simplest approximate methods for reinforcement learning with very large state spaces; it is a special case of linear value function approximation with binary features. The main advantages of using aggregation in comparison with other value function approximation methods are its simplicity, flexibility, and the ease of interpretability (Bean et al., 1987; Bertsekas and Castanon, 1989; Van Roy, 2005). Informally, value function approximation methods compute an approximately-optimal policy ˜π by computing an approximate value function ˜v as an intermediate step. The quality of the solution can be measured by its performance loss: ρ(π⋆) −ρ(˜π) where π⋆is the optimal policy, and ρ(·) is the γ-discounted infinite-horizon return of the policy, averaged over (any) given initial state distribution. 
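Since ρ(·) recurs throughout, it helps to recall that evaluating a fixed policy reduces to a linear solve: vπ = (I − γPπ)⁻¹rπ and ρ(π) = αᵀvπ. A minimal NumPy sketch (the two-state chain is our own toy example, not from the paper):

```python
import numpy as np

def policy_return(P, r, alpha, gamma):
    """gamma-discounted infinite-horizon return: alpha^T (I - gamma*P)^{-1} r."""
    v = np.linalg.solve(np.eye(len(r)) - gamma * P, r)
    return alpha @ v

# Toy two-state chain: state 0 pays reward 1 forever, state 1 pays 0.
P = np.array([[1.0, 0.0], [0.0, 1.0]])
r = np.array([1.0, 0.0])
alpha = np.array([1.0, 0.0])
```

Starting in state 0, the return is 1/(1 − γ), which is the growth rate of the total return mentioned below.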
The tight upper bound guarantees on the performance loss—tighter for state-aggregation than for general linear value function approximation—are (Van Roy, 2005), ρ(π⋆) − ρ(˜π) ≤ 4 γ ϵ(v⋆)/(1 − γ)² (1.1) where ϵ(v⋆)—defined formally in Section 4—is the smallest approximation error for the optimal value function v⋆. It is important that the error is with respect to the optimal value function which can have special structural properties, such as convexity in inventory management problems (Porteus, 2002). Because the bound in (1.1) is tight, the performance loss may grow with the discount factor as fast as γ/(1 − γ)² while the total return for any policy only grows as 1/(1 − γ). Therefore, for γ sufficiently close to 1, the policy ˜π computed through state aggregation may be no better than a random policy even when the approximation error of the optimal policy is small. This large performance loss is caused by large errors in approximating sub-optimal value functions (Van Roy, 2005). In this paper, we show that it is possible to guarantee much smaller performance loss by using a robust model of the approximation errors through a new algorithm we call RAAM (robust approximation for aggregated MDPs). Informally, we use robustness to reduce the approximated return of policies with large approximation errors to make it less likely that such policies will be selected. The performance loss of RAAM can be bounded as: ρ(π⋆) − ρ(˜π) ≤ 2 ϵ(v⋆)/(1 − γ) . (1.2) As the main contribution of the paper—described in Section 3—we incorporate the desired robustness into the aggregation model by assuming bounded worst-case state importance weights. The state importance weights determine the relative importance of the approximation errors among the states. RAAM formulates the robust optimization over the importance weights as a robust Markov decision process (RMDP).
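To see how much weaker the classical guarantee (1.1) is than the robust guarantee (1.2), a small Python sketch with illustrative values of γ and ϵ(v⋆) (our own numbers, not from the paper):

```python
def classical_bound(gamma, eps):
    """Tight performance-loss bound (1.1) for standard state aggregation."""
    return 4 * gamma * eps / (1 - gamma) ** 2

def raam_bound(gamma, eps):
    """Performance-loss bound (1.2) for the robust formulation."""
    return 2 * eps / (1 - gamma)

# The ratio of the two bounds is 2*gamma/(1 - gamma): it blows up as
# gamma -> 1, while the RAAM bound grows only as 1/(1 - gamma).
for gamma in (0.9, 0.99, 0.999):
    print(gamma, classical_bound(gamma, 0.1), raam_bound(gamma, 0.1))
```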
RMDPs extend MDPs to allow uncertain transition probabilities and rewards and preserve most of the favorable MDP properties (Iyengar, 2005; Nilim and Ghaoui, 2005; Le Tallec, 2007; Wiesemann et al., 2013). RMDPs can be solved in polynomial time and the solution methods are practical (Kaufman and Schaefer, 2013; Hansen et al., 2013). To minimize the overhead of RAAM in comparison with standard aggregation, we describe a new linear-time algorithm for the Bellman update in Section 3.1 for RMDPs with robust sets constrained by the L1 norm. Another contribution of this paper—described in Section 4—is the analysis of RAAM performance loss and the impact of the choice of robust uncertainty sets. We focus on constructing aggregate RMDPs with rectangular uncertainty sets (Iyengar, 2005; Wiesemann et al., 2013) and show that it is possible to use MDP structural properties to tighten RAAM's performance loss guarantees compared to (1.2). The experimental results in Section 5 empirically illustrate settings in which RAAM outperforms standard state aggregation methods. In particular, RAAM is more robust to sub-optimal policies with a large approximation error. The results also show that the computational overhead of using the robust formulation is very small. 2 Preliminaries In this section, we briefly overview robust Markov decision processes (RMDPs). RMDPs generalize MDPs to allow for uncertain transition probabilities and rewards. Our definition of RMDPs is inspired by stochastic zero-sum games to generalize previous results to allow for uncertainty in both the rewards and transition probabilities (Filar and Vrieze, 1997; Iyengar, 2005). Formally, an RMDP is a tuple (S, A, B, P, r, α), where S is a finite set of states, α ∈ △S is the initial distribution, As is a set of actions that can be taken in state s ∈ S, and Bs is a set of robust outcomes for s ∈ S that represent the uncertainty in transitions and rewards.
From a game-theoretic perspective, Bs can be seen as the actions of the adversary. For any a ∈ As, b ∈ Bs, the transition probabilities are Pa,b : S → △S and the reward is ra,b : S → R. The rewards depend only on the starting state and are independent of the target state¹. The basic solution concepts of RMDPs are very similar to regular MDPs with the exception that the solution also includes the policy of the adversary. We consider the set of randomized stationary policies ΠR = {πs ∈ △As}s∈S as candidate solutions and use ΠD for deterministic policies. Two main practical models of the uncertainty in Bs have been considered: s-rectangular and s, a-rectangular sets (Le Tallec, 2007; Wiesemann et al., 2013). In s-rectangular uncertainty models, the realization of the uncertainty depends only on the state and is independent of the action; the corresponding set of nature's policies is: ΞS = {ξs ∈ △Bs}s∈S. In s, a-rectangular models, the realization of the uncertainty can also depend on the action: ΞSA = {ξs,a ∈ △Bs}s∈S,a∈As. We will also consider restricted sets on the adversary's policies: ΞQ S = {ξs ∈ Qs}s∈S and ΞQ SA = {ξs,a ∈ Qs}s,a∈S×As, for some Qs ⊂ △Bs. Next, we briefly overview the basic properties of robust MDPs; please refer to (Iyengar, 2005; Nilim and Ghaoui, 2005; Le Tallec, 2007; Wiesemann et al., 2013) for more details. The transitions and rewards for any stationary policies π and ξ are defined as: Pπ,ξ(s, s′) = Σ_{a,b∈As×Bs} Pa,b(s, s′) πs,a ξs,b , rπ,ξ(s) = Σ_{a,b∈As×Bs} ra,b(s) πs,a ξs,b . ¹Rewards that depend on the target state can be readily reduced to independent ones by taking the appropriate expectation. It will be convenient to use Pπ,ξ to denote the transition matrix and rπ,ξ and α as vectors over states. We will also use I to denote an identity matrix and 1, 0 to denote vectors of ones and zeros respectively with appropriate dimensions.
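The mixing formulas above vectorize directly. A hedged NumPy sketch (the array layout, and the simplifying assumption of a shared action set As = A and outcome set Bs = B across states, are ours):

```python
import numpy as np

def mixed_dynamics(P, r, pi, xi):
    """Mix per-(action, outcome) dynamics under policies pi and xi.

    P:  (A, B, S, S) transition probabilities P_{a,b}(s, s')
    r:  (A, B, S)    rewards r_{a,b}(s)
    pi: (S, A)       agent's randomized policy pi_{s,a}
    xi: (S, B)       adversary's policy xi_{s,b} (s-rectangular)
    Returns P_{pi,xi} of shape (S, S) and r_{pi,xi} of shape (S,).
    """
    # P_{pi,xi}(s, s') = sum_{a,b} P_{a,b}(s, s') * pi_{s,a} * xi_{s,b}
    P_mix = np.einsum('abst,sa,sb->st', P, pi, xi)
    # r_{pi,xi}(s) = sum_{a,b} r_{a,b}(s) * pi_{s,a} * xi_{s,b}
    r_mix = np.einsum('abs,sa,sb->s', r, pi, xi)
    return P_mix, r_mix
```

If each P[a, b, s, :] is a probability distribution and pi, xi have rows on the simplex, the rows of P_mix again sum to one.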
Using this notation, with a s, a-rectangular model, the objective in the RMDP is to maximize the γ-discounted infinite horizon robust return ρ as: ρ−= sup π∈ΠR ρ−(π) = sup π∈ΠR inf ξ∈ΞSA ρ(π, ξ) = sup π∈ΠR inf ξ∈ΞSA ∞ X t=0 αT(γ Pπ,ξ)t rπ,ξ . (RBST) The negative superscript denotes the fact that this is the robust return. The value function for a policy pair π and ξ is denoted by v− π,ξ and the optimal robust value function is v− ⋆. Similarly to regular MDPs, the optimal robust value function must satisfy the robust Bellman optimality equation: v− ⋆(s) = max π∈ΠR min ξ∈ΞQ SA X a,b∈As×Bs πs,a ξs,a,b ra,b(s) + γ X s′∈S Pa,b(s, s′) v− ⋆(s′) . (2.1) 3 RAAM: Robust Approximation for Aggregated MDPs This section describes how RAAM uses transition samples to compute an approximately optimal policy. We also describe a linear-time algorithm for computing value function updates for the robust MDPs constructed by RAAM. Algorithm 1: RAAM: Robust Approximation for Aggregated MDPs // Σ - samples, w - weights, θ - aggregation, ω - robustness Input: Σ, w, θ, ω Output: ¯π – approximately optimal policy // Compute RMDP parameters 1 S ←{θ(¯s) : (¯s, ¯s′, ¯a, r) ∈Σ} ∪{θ(¯s′) : (¯s, ¯s′, ¯a, ¯r) ∈Σ} ; // States 2 forall the s ∈S do 3 As ←{¯a : (¯s, ¯s′, ¯a, r) ∈Σ, s = θ(¯s)} ; // Actions 4 Bs ←{¯s : (¯s, ¯s′, ¯a, r) ∈Σ, s = θ(¯s)} ; // Outcomes 5 end // Compute RMDP transition probabilities and rewards 6 forall the s, s′ ∈S × S do 7 forall the a, b ∈As × Bs do 8 Σ′ ←{(¯s′, ¯r) : (¯s, ¯s′, ¯a, ¯r) ∈Σ, θ(¯s) = s, a = ¯a, b = ¯s} ; 9 Pa,b(s, s′) ← 1 |Σ′| P ¯s′,·∈Σ′ 1s′=θ(¯s′) ; 10 ra,b(s) ←P ·,¯r∈Σ′ ¯r/|Σ′| ; 11 end 12 end // Construct robust sets based on state weights and L1 bounds 13 Qs ←{ξ ∈△Bs : ∥ξ − ws 1Tw|Bs ∥1 ≤ω}; 14 ΞQ SA ←{ξs,a ∈Qs}s,a∈S×As; // Solve RMDP 15 Solve (2.1) to get π⋆—the optimal RMDP policy—and let ¯π¯s,a = π⋆ θ(¯s),a ; 16 return ¯π ; Algorithm 1 depicts a simplified implementation of RAAM. 
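The estimation loop in lines 6–12 of Algorithm 1 amounts to empirical averaging over the samples that fall into each (aggregate state, action, outcome) triple. A hedged Python sketch (the data structures are our own, not from the paper's implementation):

```python
from collections import defaultdict

def build_rmdp(samples, theta):
    """Estimate RMDP parameters from transition samples, as in Algorithm 1.

    samples: iterable of (s_bar, s_bar_next, a, r) raw-MDP transitions
    theta:   aggregation function mapping raw states to aggregate states
    Returns P[(s, a, b)][s_next] (empirical probabilities) and rew[(s, a, b)].
    """
    counts = defaultdict(lambda: defaultdict(int))
    rewards = defaultdict(list)
    for s_bar, s_next_bar, a, r in samples:
        # the robust outcome b is the raw (un-aggregated) source state itself
        key = (theta(s_bar), a, s_bar)
        counts[key][theta(s_next_bar)] += 1
        rewards[key].append(r)
    P = {k: {sp: c / sum(cs.values()) for sp, c in cs.items()}
         for k, cs in counts.items()}
    rew = {k: sum(v) / len(v) for k, v in rewards.items()}
    return P, rew
```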
In general, we use ¯s to distinguish the un-aggregated MDP states from the states in the aggregated RMDP. The main input to the algorithm consists of transition samples Σ = {(¯si, ¯s′i, ¯ai, ri)}i∈I, which represent transitions from a state ¯si to the state ¯s′i yielding reward ri when taking action ¯ai; the transitions need to be sampled according to the transition probabilities conditioned on the state and action. The aggregation function θ : ¯S → S, which maps every MDP state from ¯S to an aggregate RMDP state, is also assumed to be given. Finally, the state weights w ∈ △S and the robustness ω are tunable parameters. We use the L1 norm to bound the uncertainty. The representation uses ω to continuously trade off between fixed importance weights for ω = 0 and complete robustness for ω = 2. We analyze the effect of this parameter in Section 4.

[Figure 1: An example MDP. Figure 2: Aggregated RMDP.]

However, simply setting w to be uniform and ω = 2 provides sufficiently strong theoretical guarantees and generally works well in practice. Finally, we assume s, a-rectangular uncertainty sets for the sake of reducing the computational complexity; better approximations could be obtained by using s-rectangular sets, but this makes no difference for deterministic policies. Next, we show an example that demonstrates how the robust MDP is constructed from the aggregation. We will also use this example to show the tightness of our bounds on the performance loss. Example 3.1. The original MDP problem is shown in Fig. 1. The round white nodes represent the states, while the black nodes represent state-action pairs. All transitions are deterministic, with the number next to the transition representing the corresponding reward. Two shaded regions marked with s1 and s2 denote the aggregate states. Fig. 2 depicts the corresponding aggregated robust MDP constructed by RAAM.
The rectangular nodes in this picture represent the robust outcome. 3.1 Reducing Computational Complexity Solving an RMDP is in general more difficult than solving a regular MDP. Most RMDP algorithms are based on value or policy iteration, but in general involve repeated solutions of linear or convex programs (Kaufman and Schaefer, 2013). Even though the worst-case time complexity of these algorithms is polynomial, they may be impractical because they require repeatedly solving (2.1) for every state, action, and iteration. Each of these computations may require solving a linear program. The optimization over ΞSA when computing the value function update for solving Line 15 of Algorithm 1 requires solving the following linear program for each s and a. min ξs,a∈△Bs ξT s,azs = X b∈Bs ξs,a,b ra,b(s) + γ X s′∈S Pa,b(s, s′) v(s′) s.t. ∥ξs,a −qs∥1 ≤ω . (3.1) Here qs = ws/1Tw(Bs). While this problem can be solved directly using a linear program solver, we describe a significantly more efficient method in Algorithm 2. Theorem 3.2. Algorithm 2 correctly solves (3.1) in O(|Bs|) time when the full sort is replaced by a quickselect quantile selection algorithm in Line 4. The proof is technical and is deferred to Appendix B.1. The main idea is to dualize the norm constraint and examine the structure of the optimal solution as a function of the dual variable. 4 Performance Loss Bounds This section describes new bounds on the performance loss which is the difference between the return of the optimal and approximate policy. The performance loss is a more reliable measure of the error than the error in the value function (Van Roy, 2005). We also analyze the effect of the state weights w and the robustness parameter ω on performance loss. It will be convenient, for the purpose of deriving the error bounds, to treat aggregation as a linear value function approximation (Van Roy, 2005). 
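The structure exploited by Algorithm 2 is that the minimizer of (3.1) moves as much probability mass as the L1 ball allows (at most ω/2) from the highest-value outcomes onto the single lowest-value outcome. A sketch in Python (O(n log n) here because of the full sort; the paper's O(|Bs|) claim relies on replacing the sort with a quickselect quantile selection):

```python
import numpy as np

def worst_case_l1(z, q, omega):
    """Minimize xi @ z over the simplex subject to ||xi - q||_1 <= omega.

    z:     per-outcome values r_{a,b}(s) + gamma * sum_s' P_{a,b}(s,s') v(s')
    q:     nominal outcome distribution (the normalized state weights)
    omega: L1 budget in [0, 2]
    """
    order = np.argsort(z)                      # outcomes from smallest to largest z
    xi = np.asarray(q, dtype=float).copy()
    eps = min(1.0 - xi[order[0]], omega / 2.0)  # mass we may shift
    xi[order[0]] += eps                         # pile it on the cheapest outcome
    for i in order[::-1]:                       # remove it from the costliest ones
        if eps <= 0:
            break
        if i == order[0]:
            continue
        take = min(eps, xi[i])
        xi[i] -= take
        eps -= take
    return xi
```

With ω = 0 this returns q unchanged; with ω = 2 it recovers the fully robust response that puts all mass on the worst (smallest-z) outcome.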
For that purpose, define a matrix Φ(¯s, s) = 1s=θ(¯s) 4 Algorithm 2: Solve (3.1) in Line 15 of Algorithm 1 Input: zs, qs – sorted such that zs is non-decreasing, indexed as 1 . . . n Output: ξ⋆ s,a – optimal solution of (3.1) 1 o ←copy(qs) ; i ←n ; 2 ϵ ←min{1 −q1, ω 2 } ; 3 o1 ←ϵ + q1; 4 while ϵ > 0 ; // Determine the threshold 5 do 6 oi ←oi −min{ϵ, oi} ; 7 ϵ ←ϵ −min{ϵ, oi} ; 8 i ←i −1; 9 end 10 return o ; where s ∈S, ¯s ∈¯S, and 1 represents the indicator function. That is, each column corresponds to a single aggregate state with each row entry being either 1 or 0 depending on whether the original state belongs into the aggregate state. In order to simplify the derivation of the bounds, we start by assuming that the RMDP in RAAM is constructed from the full sample of the original MDP; we discuss finite-sample bounds later. Therefore, assume that the full regular MDP is M = ( ¯S, ¯ A, ¯P, ¯r, ¯α); we are using bars in general to denote MDP values, but assume that A = ¯ A. We also use ¯ρ to denote the return of a policy in the MDP. The robust outcomes correspond to the original states that compose any s: Bs = θ−1(s). The RMDP transitions and rewards for some π and ξ are computed as: Pπ,ξ = ΦT diag ¯ξ ¯Pπ Φ rπ,ξ = ΦT diag ¯ξ ¯rπ αT = ¯αT Φ. (4.1) Here, ¯ξ¯s = P a∈A¯s πs,a ξs,a,¯s such that θ(¯s) = s are state weights induced by ξ. There are two types of optimal policies: ¯π⋆and π⋆; ¯π⋆is the truly optimal policy, while π⋆is the optimal policy given aggregation constraints requiring the same action for all aggregated states. For any computed policy ˜π, we focus primarily on the performance loss ¯ρ(π⋆)−¯ρ(˜π). The total loss can be easily decomposed as ¯ρ(¯π⋆)−¯ρ(˜π) = ¯ρ(¯π⋆)−¯ρ(π⋆) + ¯ρ(π⋆)−¯ρ(˜π) . The error ρ(¯π⋆)−¯ρ(π⋆) is independent of how the value of the aggregation is computed. The following theorem states the main result of the paper. 
A part of the results uses the concentration coefficient C for a given distribution µ of the MDP (Munos, 2005), which is defined by: ¯Pa(s, s′) ≤ C µ(s′) for all s, s′ ∈ ¯S, a ∈ ¯A. Theorem 4.1. Let ˜π be the solution of Algorithm 1 based on the full sample for ω = 2. Then: ¯ρ(π⋆) − ¯ρ(˜π) ≤ 2 ϵ(v⋆)/(1 − γ), where ϵ(v⋆) = min_{v∈R^S} ∥v⋆ − Φv∥∞, and this bound is tight. In addition, when the concentration coefficient of the original MDP is C with distribution µ, then ϵ(v⋆) = min_{v∈R^S} ∥e(v)∥1,σ where σ = ΦT (γ α + (1 − γ) µ) and e(v)s = max_{¯s∈θ−1(s)} |(I − γ ¯Pπ⋆)(¯v⋆ − Φ v)|¯s. Before proving Theorem 4.1, it is instructive to compare it with the performance loss of related reinforcement learning algorithms. When the aggregation is constructed using constant and uniform aggregation weights (as when Algorithm 1 is used with ω = 0), the performance loss of the computed policy ˜π is bounded as (Tsitsiklis and Van Roy, 1996; Gordon, 1995): ¯ρ(π⋆) − ¯ρ(˜π) ≤ 4 γ ϵ(v⋆)/(1 − γ)². This bound holds specifically for aggregation (and approximators that are averagers) and is tight; the performance loss for more general algorithms can be even larger. Note that the difference in the 1/(1 − γ) factor is very significant when γ → 1. Van Roy (2005) shows bounds similar to those of RAAM, but they are weaker and require the invariant distribution ψ. In addition, performance loss bounds similar to Theorem 4.1 can be guaranteed by DRADP, but that approach leads in general to NP-hard computational problems (Petrik, 2012). In fact, the robust aggregation can be seen as a special case of DRADP with rectangular uncertainty sets (Iyengar, 2005). To prove Theorem 4.1 we need the following result showing that, for properly chosen robust uncertainty sets, the robust return is a lower bound on the true return. We will use ¯dπ to represent the normalized occupancy frequency for the MDP M and policy π. Lemma 4.2. Assume the uncertainty set to be ΞQ S or ΞQ SA as constructed in (4.1).
Then ρ−(π) ≤ ¯ρ(π) as long as for each π ∈Π we have that ¯dπ|Bs ∈ψs · Qs for each s ∈S and some ψs. When ω = 2, the inequality in the theorem also holds for value functions as Proposition B.1 in the appendix shows. Proof. We prove the result for s-rectangular uncertainty sets; the proof for s, a-rectangular sets is analogous. When the policy π is fixed, solving for the nature’s policy represents a minimization MDP with continuous action constraints that has the following dual linear program formulation (Marecki et al., 2013): ρ−(π) = min d∈{RBs }s∈S dT ¯rπ / (1 −γ) s.t. ΦT (I −γ ¯P T π ) d = (1 −γ) ΦT ¯α ds,b / X b′∈Bs ds,b′ ∈Qs, ∀s ∈S, ∀b ∈Bs . (4.2) Note that the left-hand side of the last constraint corresponds to ξa,b. Now, setting d = ¯dπ shows the desired inequality for π; this value is feasible in (4.2) from (B.3) and the objective value is correct from (B.4). The normalization constant is ψs = P b′∈Bs ds,b′. Proof of Theorem 4.1. Using Lemma 4.2, the performance loss for ω = 2 can be bounded as: 0 ≤¯ρ(π⋆) −¯ρ(˜π) ≤¯ρ(π⋆) −ρ−(˜π) = min π∈Π(¯ρ(π⋆) −¯ρ−(π)) ≤¯ρ(π⋆) −ρ−(π⋆) For a policy π, solving ρ−(π) corresponds to an MDP with the following LP formulation: ¯ρ(π⋆) −ρ−(π⋆) ≤min v {αT(v⋆−Φv) : Φv ≤γ ¯Pπ⋆Φv + rπ⋆} . (4.3) Now, let the minimum ϵ = minv ∥v⋆−Φv∥∞be attained at v0. Then, to show that v1 = v0−1+γ 1−γ ϵ 1 is feasible in (4.3), for any k: −ϵ 1 ≤v⋆−Φv0 ≤ϵ 1 (k −1)ϵ 1 ≤v⋆−Φv0 + kϵ 1 ≤(1 + k)ϵ 1 (4.4) (k −1)γϵ 1 ≤γ ¯Pπ⋆(v⋆−Φv0 + kϵ 1) ≤(1 + k)γϵ 1 (4.5) The derivation above uses the monotonicity of ¯Pπ⋆in (4.5). Then, after multiplying by (I −γ ¯Pπ⋆), which is monotone, and rearranging the terms: (I −γ ¯Pπ⋆)Φ(v0 −kϵ 1) ≤(1 + γ −(1 −γ)k)ϵ 1 + rπ⋆, where (I −γ ¯Pπ⋆)v⋆= rπ⋆. Letting k = (1 + γ)/(1 −γ) proves the needed feasibility and (4.4) establishes the bound. The tightness of the bound follows from Example 3.1 with ϵ →0. 
The bound on the second inequality follows from bounding the dual gap between the primal feasible solution v1 and an upper bound on a dual optimal solution. To upper-bound the dual solution, define a concentration coefficient for an RMDP similarly to an MDP: ¯Pa,b(s, s′) ≤Cµ(s′) for all s, s′ ∈S, a ∈As, b ∈Bs. By algebraic manipulation, if the original MDP has a concentration coefficient C with a distribution µ then the aggregated RMDP has the same concentration coefficient with a distribution ΦTµ. Then, using Lemma 4.3 in (Petrik, 2012), the occupancy frequency (and therefore the dual value) of the RMDP for any policy is bounded as u ≤ C 1−γ Φ((1 −γ) ΦT α + γΦTµ). The linear program (4.3) can be formulated as the following penalized optimization problem: max u min v αT(v⋆−Φv) + uT (I −γ ¯Pπ⋆)Φv −rπ⋆ + , Note that: αT(v⋆−Φv) = αT I −γ ¯Pπ⋆−1 (I −γ ¯Pπ⋆)(v⋆−Φv) = ¯dT π⋆(I −γ ¯Pπ⋆)(v⋆−Φv) . 6 The penalized optimization problem can be rewritten, using the fact that rπ⋆= (I −γ ¯Pπ⋆) v⋆and the feasibility of v1 as: max u 2 1 −γ uT |(I −γ ¯Pπ⋆)(Φ v1 −v⋆)| s.t. u ≤ C 1 −γ Φ ((1 −γ) ΦTα + γ ΦTµ) The theorem then follows by simple algebraic manipulation from the upper bound on u. 4.1 State Importance Weights In this section, we discuss how to select the state importance weights w and the robustness parameter ω. Note that Lemma 4.2 shows that any choice of w and ω such that the normalized occupancy frequency is within ω of w in terms of the L1 norm, provides the theoretical guarantees of Theorem 4.1. Smaller uncertainty sets under this condition only improve the guarantees. In practice, the values w and ω can be treated as regularization parameters. We show sufficient conditions under which the right choice of w and ω can significantly reduce the performance loss, but these conditions have a more explanatory than predictive character. 
As it can be seen easily from the proof of Lemma 4.2 and Appendix B.2, the optimal choice for the RAAM weights w to approximate the return of a policy π is to use its state occupancy frequency. While the occupancy frequency is rarely known, there exist structural properties, such as the concentration coefficient (Munos, 2005), that can lead to upper bounds on the possible occupancy frequencies. However, the following example shows that simply using an upper bound on the occupancy frequency is not sufficient to reduce the performance loss. Example 4.3. Consider an MDP with 4 states: s1, . . . , s4 and the aggregation with two states that correspond to {s1, s2} and {s3, s4}. Let the set of admissible occupancy frequencies be: Q = {d ∈ △4 : 1/4 ≤d(s1) + d(s4) ≤1/2, d ≥1/8}. The set of uncertainties for this bounded set is for i = 1, 3, and j = 2, 4 as follows: ΞQ S = {d ∈R4 + : 1/6 ≤d(si) ≤4/5, 1/5 ≤d(sj) ≤ 5/6, d(si) + d(sj) = 1}, which is smaller than ΞS. However, Q without the constraint d ≥1/8 results in ΞQ S = ΞS. As Example 4.3 demonstrates, the concentration coefficient alone does not guarantee an improvement in the policy loss. One possible additional structural assumption is that the occupancy frequencies for the individual states in each aggregate state to be “correlated” across policies. More formally, the aggregation correlation coefficient D ∈R+ must satisfy: λ σ(¯s) ≤dπ(¯s) ≤λ D σ(¯s) , (4.6) for some λ ≥0, each ¯s ∈¯S, and σ as defined in Theorem 4.1. Using this assumption, we can derive the following theorem. Consider the uncertainty set Qs = {q : q ≤C (σ|Bs)/(1Tσ(Bs))} then we can show the following theorem. Theorem 4.4. Given an MDP with a concentration coefficient C for µ and a correlation coefficient D, then for uncertainty set ΞQ S and for σ = ΦT (γ α + (1 −γ) µ) we have: ¯ρ(π⋆) −¯ρ(˜π) ≤2 C D 1 −γ min v∈RS ∥(I −γ ¯Pπ⋆) (¯v⋆−Φ v)∥1,σ . The proof is based on a minor modification of Theorem 4.1 and is deferred until the appendix. 
Theorem 4.4 improves on Theorem 4.1 by entirely replacing the L∞ norm with a weighted L1 norm. While the correlation coefficient may not be easy to determine in practice, it may be a property worth analyzing to explain a failure of the method. Finite-sample bounds are beyond the scope of this paper. However, the sampling error is additive and can be based, for example, on ϵ-coverage assumptions made for approximate linear programs. In particular, (4.2) represents an approximate linear program and can be bounded as such, as done for example by Petrik et al. (2010). 5 Experimental Results In this section, we experimentally validate the approximation properties of RAAM with respect to the quality of the solutions and the computational time required.

[Figure 3: Sensitivity to the reward perturbation for regular aggregation and RAAM (mean return versus extra reward rq, for mean aggregation/LSPI, robust aggregation with ‖·‖1 ≤ 0.5 and ‖·‖1 ≤ 1.5, and approximate linear programming). Figure 4: Time to compute (3.1) for Algorithm 2 versus a CPLEX LP solver.]

For the purpose of the empirical evaluation we use a modified inverted pendulum problem with a discount factor of 0.99, as described for example in (Lagoudakis and Parr, 2003). For the aggregation, we use a uniform grid of dimension 40 × 40 and uniform sampling of dimensions 120 × 120. The ordinary setting is solved easily and reliably by both the standard aggregation and RAAM. To study the robustness with respect to the approximation error of suboptimal policies, we add an additional reward ra for the pendulum under a tilted angle (π/2 − 0.12 ≤ θ ≤ π/2 and ¨θ ≥ 0, where θ is the angle and ¨θ is the action). This reward can be achieved only by a suboptimal policy. Fig.
3 shows the return of the approximate policy as a function of the magnitude of the additional reward for the standard aggregation and RAAM with various values of ω. We omit the confidence ranges, which are small, to enhance image clarity. Note that we assume that once the pendulum goes over π/2, the reward -1 is accrued until the end of the horizon. This result clearly demonstrates the greater stability and robustness of RAAM compared with standard aggregation. The results also illustrate the lack of stability of ALP, which can be seen as an optimistic version of RAAM. We observed the same behavior for other parameter choices. The main cost of using RAAM compared to ordinary aggregation is the increased computational complexity. Our results show, however, that the computational overhead of RAAM is minimal. Fig. 4 shows that Algorithm 2 is several orders of magnitude faster than CPLEX 12.3. The value function update for the aggregated inverted pendulum with 1600 states, 3 actions, and about 9 robust outcomes takes 8.7ms for ordinary aggregation, 8.8ms for RAAM with ω = 2, and 9.7ms for RAAM with ω = 1. The guarantees on the improvement for one iteration are the same for both algorithms, and all implementations are in C++. 6 Conclusion RAAM is a novel approach to state aggregation that leverages RMDPs. RAAM significantly reduces performance loss guarantees in comparison with standard aggregation while introducing negligible computational overhead. The robust approach has some distinct advantages in comparison with previous methods that offer improved performance loss guarantees. Our experimental results are encouraging and show that adding robustness can significantly improve the solution quality. Clearly, not all problems will benefit from this approach. However, given the small computational overhead, there is little reason not to try it.
While we do provide some theoretical justification for choosing w and ω, it is most likely that in practice these are best treated as regularization parameters. Many improvements on the basic RAAM algorithm are possible. Most notably, the RMDP action set could be based on “meta-actions” or “options”. The L1 norm may be replaced by other polynomial norms or the KL divergence. RAAM could also be extended to adaptively choose the most appropriate aggregation for the given samples (Bernstein and Shikim, 2008). Finally, using s-rectangular uncertainty sets may lead to better results. Acknowledgments We thank Ban Kawas for extensive discussions on this topic and the anonymous reviewers for their comments that helped to significantly improve the paper. References Bean, J. J. C., Birge, J. R. J., and Smith, R. R. L. (1987). Aggregation in dynamic programming. Operations Research, 35(2), 215–220. Bernstein, A. and Shikim, N. (2008). Adaptive aggregation for reinforcement learning with efficient exploration: Deterministic domains. In Conference on Learning Theory (COLT). Bertsekas, D. P. D. and Castanon, D. A. (1989). Adaptive aggregation methods for infinite horizon dynamic programming. IEEE Transactions on Automatic Control, 34, 589–598. de Farias, D. P. and Van Roy, B. (2003). The linear programming approach to approximate dynamic programming. Operations Research, 51(6), 850–865. Desai, V. V., Farias, V. F., and Moallemi, C. C. (2012). Approximate dynamic programming via a smoothed linear program. Operations Research, 60(3), 655–674. Filar, J. and Vrieze, K. (1997). Competitive Markov Decision Processes. Springer. Gordon, G. J. (1995). Stable function approximation in dynamic programming. In International Conference on Machine Learning, pages 261–268. Carnegie Mellon University. Hansen, T., Miltersen, P., and Zwick, U. (2013). Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor.
Journal of the ACM (JACM), 60(1), 1–16. Iyengar, G. N. (2005). Robust dynamic programming. Mathematics of Operations Research, 30(2), 257–280. Kaufman, D. L. and Schaefer, A. J. (2013). Robust modified policy iteration. INFORMS Journal on Computing, 25(3), 396–410. Lagoudakis, M. G. and Parr, R. (2003). Least-squares policy iteration. Journal of Machine Learning Research, 4, 1107–1149. Le Tallec, Y. (2007). Robust, Risk-Sensitive, and Data-driven Control of Markov Decision Processes. Ph.D. thesis, MIT. Mannor, S., Mebel, O., and Xu, H. (2012). Lightning does not strike twice: Robust MDPs with coupled uncertainty. In International Conference on Machine Learning. Marecki, J., Petrik, M., and Subramanian, D. (2013). Solution methods for constrained Markov decision process with continuous probability modulation. In Uncertainty in Artificial Intelligence (UAI). Munos, R. (2005). Performance bounds in Lp norm for approximate value iteration. In National Conference on Artificial Intelligence (AAAI). Nilim, A. and Ghaoui, L. E. (2005). Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5), 780–798. Petrik, M. (2012). Approximate dynamic programming by minimizing distributionally robust bounds. In International Conference of Machine Learning. Petrik, M. and Zilberstein, S. (2009). Constraint relaxation in approximate linear programs. In International Conference on Machine Learning, New York, New York, USA. ACM Press. Petrik, M., Taylor, G., Parr, R., and Zilberstein, S. (2010). Feature selection using regularization in approximate linear programs for Markov decision processes. In International Conference on Machine Learning. Porteus, E. L. (2002). Foundations of Stochastic Inventory Theory. Stanford Business Books. Puterman, M. L. (2005). Markov decision processes: Discrete stochastic dynamic programming. John Wiley & Sons, Inc. Tsitsiklis, J. N. and Van Roy, B. (1996). 
An analysis of temporal-difference learning with function approximation. Van Roy, B. (2005). Performance loss bounds for approximate value iteration with state aggregation. Mathematics of Operations Research, 31(2), 234–244. Wiesemann, W., Kuhn, D., and Rustem, B. (2013). Robust Markov decision processes. Mathematics of Operations Research, 38(1), 153–183.
|
2014
|
176
|
5,265
|
On Prior Distributions and Approximate Inference for Structured Variables Oluwasanmi Koyejo Psychology Dept., Stanford sanmi@stanford.edu Rajiv Khanna ECE Dept., UT Austin rajivak@utexas.edu Joydeep Ghosh ECE Dept., UT Austin ghosh@ece.utexas.edu Russell A. Poldrack Psychology Dept., Stanford poldrack@stanford.edu Abstract We present a general framework for constructing prior distributions with structured variables. The prior is defined as the information projection of a base distribution onto distributions supported on the constraint set of interest. In cases where this projection is intractable, we propose a family of parameterized approximations indexed by subsets of the domain. We further analyze the special case of sparse structure. While the optimal prior is intractable in general, we show that approximate inference using convex subsets is tractable, and is equivalent to maximizing a submodular function subject to cardinality constraints. As a result, inference using greedy forward selection provably achieves within a factor of (1 − 1/e) of the optimal objective value. Our work is motivated by the predictive modeling of high-dimensional functional neuroimaging data. For this task, we employ the Gaussian base distribution induced by local partial correlations and consider the design of priors to capture the domain knowledge of sparse support. Experimental results on simulated data and high dimensional neuroimaging data show the effectiveness of our approach in terms of support recovery and predictive accuracy. 1 Introduction Data in scientific and commercial disciplines are increasingly characterized by high dimensions and relatively few samples. For such cases, a-priori knowledge gleaned from expertise and experimental evidence is invaluable for recovering meaningful models.
In particular, knowledge of restricted degrees of freedom such as sparsity or low rank has become an important design paradigm, enabling the recovery of parsimonious and interpretable results, and improving storage and prediction efficiency for high dimensional problems. In Bayesian models, such restricted degrees of freedom can be captured by incorporating structural constraints on the design of the prior distribution. Prior distributions for structured variables can be designed by combining conditional distributions, each capturing portions of the problem structure, into a hierarchical model. In other cases, researchers design special purpose prior distributions to match the application at hand. In the case of sparsity, an example of the former approach is the spike and slab prior [1, 2], and an example of the latter approach is the horseshoe prior [3]. We describe a framework for designing prior distributions when the a-priori information includes structural constraints. Our framework follows the maximum entropy principle [4, 5]. The distribution is chosen as one that incorporates known information, but is as difficult as possible to discriminate from the base distribution with respect to relative entropy. The maximum entropy approach has been especially successful with domain knowledge expressed as expectation constraints. In such cases, the solution is given by a member of the exponential family [6, 7], e.g. quadratic constraints result in the Gaussian distribution. Our work extends this framework to the design of prior distributions when the a-priori information includes domain constraints. Our main technical contributions are as follows:
• We show that under standard assumptions, the information projection of a base density to domain constraints is given by its restriction (Section 2).
• We show the equivalence between relative entropy inference with data observation constraints and Bayes rule for continuous variables.
• When such restriction is intractable, we propose a family of parameterized approximations indexed by subsets of the domain (Section 2.1).
We consider approximate inference in the special case of sparse structure:
• We characterize the restriction precisely, showing that it is given by a conditional distribution (Section 3).
• We show that the approximate sparse support estimation problem is submodular. As a result, greedy forward selection is efficient and guarantees (1 − 1/e) factor optimality (Section 3.1).
Our work is motivated by the predictive modeling of high-dimensional functional neuroimaging data, measured by cognitive neuroscientists for analyzing the human brain. The data are represented using hundreds of thousands of variables. Yet due to real world constraints, most experimental datasets contain only a few data samples [8]. The proposed approach is applied to predictive modeling of simulated data and high-dimensional neuroimaging data, and is compared to Bayesian hierarchical models and non-probabilistic sparse predictive models, showing superior support recovery and predictive accuracy (Section 4). Due to space constraints, all proofs are provided in the supplement. 1.1 Preliminaries This section includes notation and a few basic definitions. Vectors are denoted by lower case x and matrices by capital X. x_{i,j} denotes the (i, j)th entry of the matrix X. x_{i,:} denotes the ith row of X and x_{:,j} denotes the jth column. Let |X| denote the determinant of X. Sets are denoted by sans serif, e.g. S. The reals are denoted by R. [n] denotes the set of integers {1, . . . , n}, and ℘(n) denotes the power set of [n]. Let X be either a countable set, or a complete separable metric space equipped with the standard Borel σ-algebra of measurable sets. Let P denote the set of probability densities on X, i.e.
positive functions P = {p : X → [0, 1], ∫_X p(x) dx = 1}. For the remainder of this paper, we make the following assumption: Assumption 1. All distributions P are absolutely continuous with respect to the dominating measure ν, so there exists a density p ∈ P that satisfies dP = p dν. To simplify notation, we use the standard dν = dx. As a consequence of Assumption 1, the relative entropy is given in terms of the densities as: KL(q∥p) = ∫_X q(x) log(q(x)/p(x)) dx. The relative entropy is strictly convex with respect to its first argument. The information projection of a probability density p to a constraint set A is given by the solution of: inf_{q∈P} KL(q∥p) s.t. q ∈ A. We will only consider projections where A is a closed convex set, so the infimum is achieved. The delta function, denoted by δ(·), is a generalized set function that satisfies ∫_X δ_A(x)f(x) dx = ∫_A f(x) dx, and ∫_X δ_A(x) dx = 1, for some A ⊆ X. The set of domain restricted densities, denoted by F_A for A ⊂ X, is the set of probability density functions supported on A, i.e. F_A = {q ∈ P | q(x) = 0 ∀x ∉ A} ∪ {δ_x, ∀x ∈ A} ⊂ P = F_X. Further, note that F_A is closed and convex for any A ⊆ X (including nonconvex A). Restriction is a standard approach for defining distributions on subsets A ⊆ X. An important special case we will consider is when A is a measure zero subset of X. The common conditional density is one such example, the existence of which follows from the disintegration theorem [9]. Restrictions of measure require extensive technical tools in the general case [10]. We will employ the following simplifying condition for the remainder of this manuscript: Condition 2. The sample space X is a subset of Euclidean space with ν given by the Lebesgue measure. Alternatively, X is a countable set with ν given by the counting measure. Let P be a probability distribution on X.
Under Assumption 1 and Condition 2, the restriction of the density p to the set A ⊂ X is given by: q(x) = p(x) / ∫_A p(x) dx for x ∈ A, and q(x) = 0 otherwise. 2 Priors for structured variables We assume a-priori information identifying the structure of X via the sub-domain A ⊂ X. We also assume a pre-defined base distribution P with associated density p. Without loss of generality, let p have support everywhere on X, i.e. p(x) > 0 ∀x ∈ X. Following the principle of minimum discrimination information, we select the prior as the information projection of the base density p to F_A. Our first result identifies the equivalence between information projection subject to domain constraints and density restriction. Theorem 3. Under Condition 2, the information projection of the density p to the constraint set F_A is the restriction of p to the domain A. Theorem 3 gives a principled justification for the domain restriction approach to structured prior design. Examples of density restriction in the literature include the truncated Gaussian, Beta and Gamma densities [11], and the restriction of the matrix-variate Gaussian to the manifold of low rank matrices [12]. Various properties of the restriction, such as its shape and tail behavior (up to re-scaling), follow directly from the base density. Thus the properties of the resulting prior are more amenable to analysis when the base measure is well understood. Next, we consider a corollary of Theorem 3 that was introduced by Williams [13]. Corollary 4. Consider the product space X = W × Y. Let the domain constraint be given by W × {ŷ} for some ŷ ∈ Y. Under Condition 2, the information projection of p to F_{W×{ŷ}} is given by p(w|ŷ)δ_ŷ. In the Bayesian literature, p(w) is known as the prior, p(y|w) is the likelihood and p(w|ŷ) is the posterior density given the observation y = ŷ. Corollary 4 considers the information projection of the joint density p(w, y) given observed data, and shows that the solution recovers the Bayesian posterior.
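As a concrete illustration of Theorem 3, the following sketch checks numerically, on a small discrete sample space, that the restriction of a base density to a subset A attains the smallest relative entropy among densities supported on A. This is illustrative code written for this summary, not code from the paper.

```python
import numpy as np

def kl(q, p):
    """Relative entropy KL(q || p) on a finite sample space (0 log 0 := 0)."""
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

def restrict(p, A):
    """Restriction of the density p to the index set A: zero out and renormalize."""
    q = np.where(A, p, 0.0)
    return q / q.sum()

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(6))                 # base density on X = {0, ..., 5}
A = np.array([1, 1, 0, 1, 0, 0], dtype=bool)  # constraint domain A

q_star = restrict(p, A)                       # candidate information projection

# Restriction is optimal: KL(q_star || p) = -log p(A), and every other
# density supported on A pays an extra KL(q || q_star) on top of that.
for _ in range(200):
    q = np.zeros(6)
    q[A] = rng.dirichlet(np.ones(int(A.sum())))
    assert kl(q_star, p) <= kl(q, p) + 1e-12
```

The identity KL(q‖p) = KL(q‖q_star) − log p(A) for q supported on A is what makes the loop above succeed for every random competitor.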
Williams [13] considered a generalization of Corollary 4, but did not consider projection to data constraints. (Specifically, Williams [13] noted “Relative information has been defined only for unconditional distributions, which say nothing about the relative probabilities of events of probability zero.”) While Corollary 4 has been widely applied in the literature, e.g. [14], to the best of our knowledge, the presented result is the first formal proof. 2.1 Approximate inference for structured variables via tractable subsets For many structural constraints of interest, restriction requires the computation of an intractable normalization constant. In theory, rejection sampling and Markov Chain Monte Carlo (MCMC) inference methods [15] do not require normalized probabilities. However, as many structured subdomains are measure zero sets with respect to the dominating measure, samples generated randomly from the base distribution are unlikely to lie in the constrained domains, e.g. random samples from a multivariate Gaussian are not sparse. Hence rejection sampling fails, and MCMC suffers from low acceptance probabilities. As a result, inference on such structured sub-domains typically requires specialized methods, e.g. [11, 12]. (When the condition that p has support everywhere is violated, we simply redefine X as the subdomain supporting p.) Figure 1: (a) Gaussian density and its restriction to a diagonal line. (b) Illustration of Theorem 5; the sequence of information projections P → F_A → F_C and the direct projection P → F_{A∩C} are equivalent. In the following, we propose a class of variational approximations based on an inner representation of the structured subdomain. Let {S_i ⊆ A} represent a (possibly overlapping) partitioning of A into subsets. We define the domain restricted density sets generated by these partitions as F_{S_i}, and their union D = ∪_i F_{S_i}. Note that by definition each F_{S_i} ⊆ D ⊆ F_A ⊆ F_X.
Our approach is to approximate the optimization over densities in F_A by optimizing over D, a smaller subset of tractable densities. Approximate inference is generally most successful when the approximation accounts for observed data. Inspired by the results of Corollary 4, we consider such a projection. Let p_A(w, y) be the information projection of the joint distribution p(w, y) to the set F_{A×{ŷ}}. We propose approximate inference via the following rule: p_{S∗,ŷ} = argmin_{q ∈ D×F_{{ŷ}}} KL(q(w, y)∥p_A(w, y)) = argmin_S min_{q ∈ F_{S×{ŷ}}} KL(q(w, y)∥p_A(w, y)). (1) Our proposed approach may be decomposed into two steps. The inner step is solved by estimating a parameterized set of prior densities {q_S} corresponding to choices of S, and the outer step is solved by the selection of the optimal subset S∗. The solution is given by p_{S∗,ŷ}(w, y) = p_{S∗}(w|ŷ)δ_ŷ (Corollary 4), with the associated approximate posterior given by p_{S∗}(w|ŷ). The following theorem considers the effect of a sequence of domain constrained information projections (see Fig. 1b), which will be useful for subsequent results. Theorem 5. Let π : [n] → [n] be a permutation function and {C_{π(i)} | C_{π(i)} ⊂ X} represent a sequence of sets with non-empty intersection B = ∩_i C_i ≠ ∅. Given a base density p, let q_0 = p, and define the sequence of information projections: q_i = argmin_{q ∈ F_{C_{π(i)}}} KL(q∥q_{i−1}). Under Condition 2, q∗ = q_N is independent of π. Further, q∗ = argmin_{q ∈ F_B} KL(q∥p). We apply Theorem 5 to formulate equivalent solutions of (1) that may be simpler to solve. Corollary 6. Let p_{S∗,ŷ}(w, y) be the solution of (1); then the posterior distribution p_{S∗}(w|ŷ) is given by: p_{S∗}(w|ŷ) = argmin_{q ∈ D} KL(q(w)∥p_A(w|ŷ)) = argmin_{q ∈ D} KL(q(w)∥p(w|ŷ)). (2) Corollary 6 implies that we can estimate the approximate structured posterior directly as the information projection of the unstructured posterior distribution p(w|ŷ).
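Theorem 5 can also be checked numerically on a discrete space: projecting sequentially onto F_{C1} and then F_{C2} (in either order) lands on the same density as a single projection onto F_{C1∩C2}. The snippet below is an illustrative sketch written for this summary, not code from the paper.

```python
import numpy as np

def restrict(p, C):
    """Information projection onto F_C for a finite sample space (Theorem 3)."""
    q = np.where(C, p, 0.0)
    return q / q.sum()

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(8))                       # base density on 8 states
C1 = np.array([1, 1, 1, 1, 1, 0, 0, 0], dtype=bool)
C2 = np.array([0, 0, 1, 1, 1, 1, 1, 0], dtype=bool)  # C1 & C2 is non-empty

seq_12 = restrict(restrict(p, C1), C2)  # project onto F_C1, then onto F_C2
seq_21 = restrict(restrict(p, C2), C1)  # the opposite order
direct = restrict(p, C1 & C2)           # one-shot projection onto F_{C1 ∩ C2}

assert np.allclose(seq_12, seq_21) and np.allclose(seq_12, direct)
```

The order-independence holds because each restriction only rescales p on the surviving states, so any sequence of restrictions ends at p renormalized on the intersection.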
Upon further examination, Corollary 6 also suggests that the proposed approximation is most useful when there exist subsets of A such that the restriction of the base density to each subset leads to tractable inference. Further, the result is most accurate when one of the subsets S∗ ⊆ A captures most of the posterior probability mass. When the optimal subset S∗ is known, the structured prior density associated with the structured posterior can be computed as shown in the following corollary. Corollary 7. Let p_{S∗,ŷ}(w, y) be the solution of (1). Define the density p_{S∗}(w) as: p_{S∗}(w) = argmin_{q ∈ F_{S∗}} KL(q(w)∥p_A(w)) = argmin_{q ∈ F_{S∗}} KL(q(w)∥p(w)); (3) then p_{S∗}(w) is the prior distribution corresponding to the Bayesian posterior p_{S∗}(w|ŷ). 3 Priors for sparse structure We now consider a special case of the proposed framework for sparse structured variables. A d-dimensional variable x ∈ X is k-sparse if d − k of its entries take a default value of c_i, i.e. |{i | x_i = c_i}| = d − k. In Euclidean space X = R^d and in most cases, c_i = 0 ∀i. Similarly, the distribution P on the domain X is k-sparse if all random variables X ∼ P are at most k-sparse. The support of x ∈ X is the set supp(x) = {i | x_i ≠ c_i} ∈ ℘(d). Let S ⊂ X denote the set of variables with support s, i.e. S = {x ∈ X s.t. supp(x) = s}. We will use the notation x_S = {x_i | i ∈ s}, and its complement x_{S′} = {x_i | i ∈ s′}, where s′ = [d]\s. The domain of k-sparse vectors is given by the union of all possible d!/((d−k)! k!) sparse support sets as A = ∪_i S_i. While the sparse domain A is non-convex, each subset S is a convex set, in fact given by linear subspaces with basis {e_i | i ∈ s}. Further, while the information projection of a base density p to A is generally intractable, the information projection to its convex subsets S turns out to be computationally tractable. We investigate the application of the proposed approximation scheme using these subsets.
Consider the information projection of an arbitrary probability measure P with density p to the set D = ∪_i F_{S_i}, given by: min_{q ∈ D} KL(q∥p) = min_{S ∈ {S_i}} min_{q ∈ F_S} KL(q∥p) = min_{S ∈ {S_i}} KL(p_S∥p). (Here p may represent the conditional densities as in Section 2.1; to simplify the discussion, we suppress the dependence on ŷ.) Applying Theorem 3, we can compute that p_S = p(x)δ_S(x)/Z, where Z is a normalization factor: Z = ∫_S p(x) dx = ∫_X p(x_S, x_{S′})δ_S(x) dx = ∫_X p(x_S|x_{S′})p(x_{S′})δ_S(x) dx = p(x_{S′} = c_{S′}). Thus, the normalization factor is a marginal density evaluated at x_{S′} = c_{S′}. We may now compute the restriction explicitly: p_S(x) = p(x_S|x_{S′})p(x_{S′})δ_S(x) / p(x_{S′} = c_{S′}) = p(x_S|x_{S′} = c_{S′})δ_S(x). (4) In other words, the information projection to a sparse support domain is the density of x_S conditioned on x_{S′} = c_{S′}. The resulting gap is: KL(p_S∥p) = ∫_S p_S(x) log(p_S(x)/p(x)) dx = ∫_S p_S(x) log( p(x) / (p(x) · p(x_{S′} = c_{S′})) ) dx = −log p(x_{S′} = c_{S′}). Thus, for a given target sparsity k, we solve: s∗ = argmax_{|s|=k} J(s), where J(s) = log p(x_{S′} = c_{S′}). (5) 3.1 Submodularity and Efficient Inference In this section, we show that the cost function J(s) is monotone submodular, and describe the greedy forward selection algorithm for efficient inference. Let F : ℘(d) → R represent a set function. F is normalized if F(∅) = 0. A bounded F can be normalized as F̃(s) = F(s) − F(∅) with no effect on optimization. F is monotonic if for all subsets u ⊆ v ⊆ [d] it holds that F(u) ≤ F(v). F is submodular if for all subsets u, v ⊆ [d] it holds that F(u ∪ v) + F(u ∩ v) ≤ F(u) + F(v). Submodular functions have a diminishing returns property [16], i.e. the marginal gain of adding elements decreases with the size of the set. Theorem 8. Let J : ℘(d) → R, J(s) = log p(x_{S′} = c_{S′}), and define J̃(s) = J(s) − J(∅); then J̃(s) is normalized and monotone submodular.
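For a Gaussian base density, the objective in (5) is easy to evaluate: the marginal of N(μ, Σ) over the complement s′ is again Gaussian, with mean μ_{s′} and covariance Σ_{s′,s′}, so J(s) = log p(x_{s′} = 0) is just a Gaussian log-density at zero. The sketch below (function name and interface are ours, and c_i = 0 is assumed) illustrates this.

```python
import numpy as np

def sparse_gap_objective(mu, Sigma, s, d):
    """J(s) = log p(x_{s'} = 0) for a N(mu, Sigma) base density (eq. (5)),
    with default values c_i = 0.  The marginal over the complement s' is
    Gaussian with mean mu[s'] and covariance Sigma[s'][:, s']."""
    sp = np.setdiff1d(np.arange(d), s)        # complement support s'
    m = mu[sp]
    C = Sigma[np.ix_(sp, sp)]
    _, logdet = np.linalg.slogdet(C)
    quad = m @ np.linalg.solve(C, m)          # (0 - m)^T C^{-1} (0 - m)
    return -0.5 * (len(sp) * np.log(2 * np.pi) + logdet + quad)

d = 4
mu = np.array([0.5, -1.0, 0.0, 2.0])
Sigma = np.diag([1.0, 2.0, 0.5, 1.5])         # independent coordinates
val = sparse_gap_objective(mu, Sigma, np.array([1, 3]), d)
```

For a diagonal Σ the value reduces to a sum of univariate Gaussian log-densities at zero over s′, matching the product-form case of Corollary 10.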
While constrained maximization of submodular functions is generally NP-hard, a simple greedy forward selection heuristic has been shown to perform almost as well as the optimal in practice, and is known to have strong theoretical guarantees. Theorem 9 (Nemhauser et al. [16]). In the case of any normalized, monotonic submodular function F, the set s∗ obtained by the greedy algorithm achieves at least a constant fraction 1 − 1/e of the objective value obtained by the optimal solution, i.e. F(s∗) ≥ (1 − 1/e) max_{|s|≤k} F(s). In addition, no polynomial time algorithm can provide a better approximation guarantee unless P = NP [17]. An additional benefit of the greedy approach is that it does not require the decision of the support size k to be made at training time. As an anytime algorithm, training can be stopped at any k based on computational constraints, while still returning meaningful results. An interesting special case occurs when the base density takes a product form. Corollary 10. Let J(s) be defined as in Theorem 8 and suppose the base density has product form, i.e. p(x) = ∏_{i=1}^d p(x_i); then J(s) is linear. In particular, define h = {p(x_i = 0), ∀i ∈ [d]}; then the solution of (5) is given by the set of dimensions associated with the smallest k values of h. 4 Experiments We present experimental results comparing the proposed sparse approximate inference projection to other sparsity inducing models. We performed experiments to test the models' ability to estimate the support of the reconstructed targets and the predictive regression accuracy. The regression accuracy was measured using the coefficient of determination R² = 1 − Σ(ŷ − y)² / Σ(y − ȳ)², where y is the target response with sample mean ȳ and ŷ is the predicted response. R² measures the gain in predictive accuracy compared to a mean model and has a maximum value of 1. The support recovery was measured using the AUC of the recovered support with respect to the true s∗.
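The greedy forward selection procedure behind Theorem 9 is only a few lines of code. The sketch below (our illustration, not the authors' implementation) takes an arbitrary set function J and grows the support one index at a time by largest marginal gain; the toy J implements the linear, product-form case of Corollary 10 with hypothetical values for h_i = p(x_i = 0).

```python
import numpy as np

def greedy_forward_selection(J, d, k):
    """Greedily maximize a monotone submodular set function J over subsets of
    [d] with |s| <= k; achieves a (1 - 1/e) approximation (Nemhauser et al.)."""
    s = set()
    for _ in range(k):
        gains = {i: J(s | {i}) - J(s) for i in range(d) if i not in s}
        s.add(max(gains, key=gains.get))  # index with largest marginal gain
    return s

# Product-form case (Corollary 10): J(s) = sum over i not in s of log p(x_i = 0).
log_h = np.log(np.array([0.9, 0.2, 0.6, 0.05, 0.4]))  # hypothetical h_i values
J = lambda s: float(sum(log_h[i] for i in range(5) if i not in s))

support = greedy_forward_selection(J, d=5, k=2)  # picks the dims with smallest h_i
```

Consistent with Corollary 10, the selected support consists of the k dimensions with the smallest p(x_i = 0), here indices 3 and 1.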
The baseline models are: (i) regularized least squares (Ridge), (ii) least absolute shrinkage and selection (Lasso) [18], (iii) automatic relevance determination (ARD) [19], (iv) Spike and Slab [1, 2]. Ridge and Lasso were optimized using implementations from the scikit-learn python package [20]. While Ridge does not return sparse weights, it was included as a baseline for regression performance. We implemented ARD using iteratively re-weighted Lasso as suggested by Wipf and Nagarajan [19]. The noise variance hyperparameter for Ridge and ARD was selected from the set {10⁻⁴, 10⁻³, . . . , 10⁴}. Lasso was evaluated using the default scikit-learn implementation, where the hyperparameter is selected from 100 logarithmically spaced values based on the maximum correlation between the features and the response. For each of these models, the hyperparameter was selected in an inner 5-fold cross validation loop. For speed and scalability, we used a publicly available implementation of Spike and Slab [21], which uses a mean field variational approximation. In addition to the weights, Spike and Slab estimates the probability that each dimension is nonzero. As Spike and Slab does not return sparse estimates, sparsity was estimated by thresholding this posterior at 0.5 for each dimension (SpikeSlab0.5); we also tested the full spike and slab posterior prediction for regression performance alone (SpikeSlabFull). The proposed projection approach is designed to be applicable to any probabilistic model. Thus, we applied the projection approach as additional post-processing for the two Bayesian model baselines. The first method is a projection of the standard Gaussian regression posterior (Sparse-G) (more details in supplement). The second is a projection of the spike and slab approximate posterior (SpikeSlabKL).
We note that since the spike and slab approximate posterior uses the mean field approximation, the posterior distribution is in product form and the projection is straightforward using Corollary 10. Support size selection: The selection of the hyperparameter k, which specifies the sparsity, can be handled by standard model selection routines such as cross-validation. We found that support size selection using sequential Bayes factors [22] was particularly effective; thus the support size was selected as the first k where log p(y|S_{k+1}) − log p(y|S_k) < ϵ. Figure 2: Simulated data performance: support recovery (AUC) and regression (R²). Panels: (a) AUC as a function of the n:k ratio; (b) R² as a function of the n:k ratio; (c) AUC as a function of SNR; (d) R² as a function of SNR. 4.1 Simulated Data We generated random high dimensional feature vectors a_i ∈ R^d with a_{i,j} ∼ N(0, 1). The response was generated as y_i = w⊤a_i + ν_i, where ν_i represents independent additive noise with ν_i ∼ N(0, σ²) for all i ∈ [n]. We set σ² implicitly via the signal-to-noise ratio (SNR) as SNR = var(y)/σ², where var(y) is the variance of y. In each experiment, we sampled a sparse weight vector w by sampling k dimensions at random from [d]; then we sampled values w_i ∼ N(0, 1) and set the other dimensions to zero. We performed a series of tests to investigate the performance of the model in different scenarios. Each experiment was run 10 times with separate training and test sets. We present the average results on the test set.
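The simulated-data generator of Section 4.1 can be sketched as follows. The exact noise calibration is an assumption on our part: since SNR = var(y)/σ² couples σ² to the generated response, we approximate var(y) by the variance of the noiseless signal.

```python
import numpy as np

def simulate(n, d, k, snr_db, rng):
    """Sparse linear regression data as in Section 4.1 (a sketch; the noise
    variance is set from the noiseless signal, an approximation to
    SNR = var(y) / sigma^2)."""
    A = rng.standard_normal((n, d))            # a_{i,j} ~ N(0, 1)
    w = np.zeros(d)
    support = rng.choice(d, size=k, replace=False)
    w[support] = rng.standard_normal(k)        # w_i ~ N(0, 1) on the support
    signal = A @ w
    sigma2 = signal.var() / 10 ** (snr_db / 10)  # SNR specified in dB
    y = signal + rng.normal(scale=np.sqrt(sigma2), size=n)
    return A, y, w

A, y, w = simulate(n=200, d=1000, k=20, snr_db=20, rng=np.random.default_rng(0))
```

With this setup, varying `n` reproduces the n:k sweep and varying `snr_db` reproduces the SNR sweep of the experiments.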
Our first experiment tested the performance of all models with limited samples. Here we set k = 20, d = 10,000 and an SNR of 20dB. The number of training samples was varied from n = 100, . . . , 400, with 200 test samples. Fig. 2a shows the model performance in terms of support recovery. With limited training samples, Sparse-G outperformed all the baselines including Lasso. We also found that SpikeSlabKL consistently outperformed SpikeSlab0.5. We speculate that the significant gap between Sparse-G and SpikeSlabKL may be partly due to the mean field assumption in the underlying Spike and Slab. Fig. 2b shows the corresponding regression performance. Again, we found that Sparse-G outperformed all other baselines, with Ridge achieving the worst performance. Our second experiment tested the performance of all models with high levels of noise. Here we set k = 20, d = 10,000 and n = 200 with 200 test samples. We varied the SNR from 40dB to −10dB (note that σ² increases as SNR is decreased). Fig. 2c shows the support recovery performance of the different models. We found a performance gap between Sparse-G and Lasso, more pronounced than in the small sample test. SpikeSlab0.5 was the worst performing model, but its performance was improved by SpikeSlabKL. Only Sparse-G achieved perfect support recovery at low noise (high SNR) levels. The regression performance is shown in Fig. 2d. While ARD and Lasso matched Sparse-G at low noise levels (high SNR), their performance degraded much faster at higher noise levels (low SNR). 4.2 Functional Neuroimaging Data Functional magnetic resonance imaging (fMRI) is an important tool for the non-invasive study of brain activity. Figure 3: Support selected by Sparse-G applied to fMRI data with 100,000 voxels. Slices are across the vertical dimension. Selected voxels are in red. fMRI studies involve measurements of blood oxygenation (which are sensitive to the
amount of local neuronal activity) while the participant is presented with a stimulus or cognitive task. Neuroimaging signals are then analyzed to identify the brain regions that exhibit a systematic response to the stimulation, and thus to infer the functional properties of those brain regions [23]. Functional neuroimaging datasets typically consist of a relatively small number of correlated high dimensional brain images. Hence, capturing the inherent structural properties of the imaging data is critical for robust inference. fMRI data were collected from 126 participants while the subjects performed a stop-signal task [24]. For each subject, contrast images were computed for “go” trials and successful “stop” trials using a general linear model with the FMRIB Software Library (FSL), and these contrast images were used for regression against estimated stop-signal reaction times. We used the normalized Laplacian of the 3-dimensional spatial graph of the brain image voxels to define the precision matrix. This corresponds to the observation that nearby voxels tend to have similar functional activation. We present the 10-fold cross validation performance of all models tested on this data. We tested all models using the high dimensional 100,000 voxel brain image and measured the average predictive R². The results are: Sparse-G (0.051), Lasso (−0.271), Ridge (−0.473), ARD (−0.478). The negative test R² for the baseline models shows worse predictive performance than the test mean predictor, and indicates the difficulty of this task. Even with mean field variational inference, the Spike and Slab models did not scale to this dataset. Only Sparse-G achieved a positive R². The support selected by Sparse-G with all 100,000 voxels is shown in Fig. 3, sliced across the vertical dimension.
The recovered voxels show biologically plausible brain locations including the orbitofrontal cortex, dorsolateral prefrontal cortex, putamen, anterior cingulate, and parietal cortex, which are correlated with the observed response. Further neuroscientific interpretation and validation will be included in an extended version of the paper. 5 Conclusion We present a principled approach for enforcing structure in Bayesian models via structured prior selection based on the maximum entropy principle. The prior is defined by the information projection of the base measure to the set of distributions supported on the constraint domain. We focus on the case of sparse structure. While the optimal prior is intractable in general, we show that approximate inference using selected convex subsets is equivalent to maximizing a submodular function subject to cardinality constraints, and propose an efficient greedy forward selection procedure which is guaranteed to achieve within a (1 − 1/e) factor of the global optimum. For future work, we plan to explore applications of our approach with other structural constraints such as low rank and structured sparsity for matrix-variate sample spaces. We also plan to explore more complicated base distributions on other sample spaces. Acknowledgments: fMRI data was provided by the Consortium for Neuropsychiatric Phenomics (NIH Roadmap for Medical Research grants UL1-DE019580, RL1MH083269, RL1DA024853, PL1MH083271). References [1] T.J. Mitchell and J.J. Beauchamp. Bayesian variable selection in linear regression. JASA, 83(404):1023–1032, 1988. [2] H. Ishwaran and J.S. Rao. Spike and slab variable selection: frequentist and Bayesian strategies. Annals of Statistics, pages 730–773, 2005. [3] C.M. Carvalho, N.G. Polson, and J.G. Scott. The horseshoe estimator for sparse signals. Biometrika, 97(2):465–480, 2010. [4] E.T. Jaynes. Information Theory and Statistical Mechanics. Physical Review Online Archive, 106(4):620–630, 1957. [5] S. Kullback.
Information Theory and Statistics. Dover, 1959. [6] D. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003. [7] O. Koyejo and J. Ghosh. A representation approach for relative entropy minimization with expectation constraints. In ICML WDDL workshop, 2013. [8] R.A. Poldrack. Inferring mental states from neuroimaging data: From reverse inference to large-scale decoding. Neuron, 72(5):692–697, 2011. [9] J.T. Chang and D. Pollard. Conditioning as disintegration. Statistica Neerlandica, 51(3):287–317, 1997. [10] A.N. Kolmogorov. Foundations of the theory of probability. Chelsea, New York, 1933. [11] P. Damien and S.G. Walker. Sampling truncated normal, beta, and gamma densities. J. of Computational and Graphical Statistics, 10(2), 2001. [12] M. Park and J. Pillow. Bayesian inference for low rank spatiotemporal neural receptive fields. In NIPS, pages 2688–2696, 2013. [13] P. Williams. Bayesian conditionalisation and the principle of minimum information. The British Journal for the Philosophy of Science, 31(2):131–144, 1980. [14] O. Koyejo and J. Ghosh. Constrained Bayesian inference for low rank multitask learning. In UAI, 2013. [15] C.P. Robert and G. Casella. Monte Carlo statistical methods, volume 58. Springer New York, 1999. [16] G.L. Nemhauser, L.A. Wolsey, and M.L. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14(1):265–294, 1978. [17] U. Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 45(4):634–652, 1998. [18] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, pages 267–288, 1996. [19] D. Wipf and S. Nagarajan. A new view of automatic relevance determination. In NIPS, pages 1625–1632, 2007. [20] F. Pedregosa et al. Scikit-learn: Machine learning in Python. JMLR, 12:2825–2830, 2011. [21] Michalis K. Titsias and Miguel Lázaro-Gredilla.
Spike and slab variational inference for multi-task and multiple kernel learning. In NIPS, volume 24, pages 2339–2347, 2011. [22] Robert E. Kass and Adrian E. Raftery. Bayes factors. JASA, 90(430):773–795, 1995. [23] T.M. Mitchell, R. Hutchinson, R.S. Niculescu, F. Pereira, X. Wang, M. Just, and S. Newman. Learning to decode cognitive states from brain images. Mach. Learn., 57(1-2):145–175, 2004. [24] Corey N. White, Eliza Congdon, Jeanette A. Mumford, Katherine H. Karlsgodt, Fred W. Sabb, Nelson B. Freimer, Edythe D. London, Tyrone D. Cannon, Robert M. Bilder, and Russell A. Poldrack. Decomposing decision components in the stop-signal task: A model-based approach to individual differences in inhibitory control. Journal of Cognitive Neuroscience, 2014.
A Residual Bootstrap for High-Dimensional Regression with Near Low-Rank Designs Miles E. Lopes Department of Statistics University of California, Berkeley Berkeley, CA 94720 mlopes@stat.berkeley.edu Abstract We study the residual bootstrap (RB) method in the context of high-dimensional linear regression. Specifically, we analyze the distributional approximation of linear contrasts c⊤(β̂_ρ − β), where β̂_ρ is a ridge-regression estimator. When regression coefficients are estimated via least squares, classical results show that RB consistently approximates the laws of contrasts, provided that p ≪ n, where the design matrix is of size n × p. Up to now, relatively little work has considered how additional structure in the linear model may extend the validity of RB to the setting where p/n ≍ 1. In this setting, we propose a version of RB that resamples residuals obtained from ridge regression. Our main structural assumption on the design matrix is that it is nearly low rank, in the sense that its singular values decay according to a power-law profile. Under a few extra technical assumptions, we derive a simple criterion for ensuring that RB consistently approximates the law of a given contrast. We then specialize this result to study confidence intervals for mean response values X_i⊤β, where X_i⊤ is the ith row of the design. More precisely, we show that conditionally on a Gaussian design with near low-rank structure, RB simultaneously approximates all of the laws X_i⊤(β̂_ρ − β), i = 1, . . . , n. This result is also notable as it imposes no sparsity assumptions on β. Furthermore, since our consistency results are formulated in terms of the Mallows (Kantorovich) metric, the existence of a limiting distribution is not required. 1 Introduction Until recently, much of the emphasis in the theory of high-dimensional statistics has been on “first order” problems, such as estimation and prediction.
As the understanding of these problems has become more complete, attention has begun to shift increasingly towards "second order" problems, dealing with hypothesis tests, confidence intervals, and uncertainty quantification [1–6]. In this direction, much less is understood about the effects of structure, regularization, and dimensionality, leaving many questions open. One collection of such questions that has attracted growing interest deals with the operating characteristics of the bootstrap in high dimensions [7–9]. Because the bootstrap is among the most widely used tools for approximating the sampling distributions of test statistics and estimators, there is much practical value in understanding what factors allow the bootstrap to succeed in the high-dimensional regime. The regression model and linear contrasts. In this paper, we focus our attention on high-dimensional linear regression, and our aim is to know when the residual bootstrap (RB) method consistently approximates the laws of linear contrasts. (A review of RB is given in Section 2.) To specify the model, suppose that we observe a response vector $Y \in \mathbb{R}^n$, generated according to $$Y = X\beta + \varepsilon, \qquad (1)$$ where $X \in \mathbb{R}^{n\times p}$ is the observed design matrix, $\beta \in \mathbb{R}^p$ is an unknown vector of coefficients, and the error variables $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_n)$ are drawn i.i.d. from an unknown distribution $F_0$, with mean 0 and unknown variance $\sigma^2 < \infty$. As is conventional in high-dimensional statistics, we assume the model (1) is embedded in a sequence of models indexed by $n$. Hence, we allow $X$, $\beta$, and $p$ to vary implicitly with $n$. We will leave $p/n$ unconstrained until Section 3.3, where we will assume $p/n \asymp 1$ in Theorem 3, and then in Section 3.4, we will assume further that $p/n$ is bounded strictly between 0 and 1. The distribution $F_0$ is fixed with respect to $n$, and none of our results require $F_0$ to have more than four moments.
Although we are primarily interested in cases where the design matrix $X$ is deterministic, we will also study the performance of the bootstrap conditionally on a Gaussian design. For this reason, we will use the symbol $E[\ldots|X]$ even when the design is non-random, so that confusion does not arise in relating different sections of the paper. Likewise, the symbol $E[\ldots]$ refers to unconditional expectation over all sources of randomness. Whenever the design is random, we will assume $X \perp\!\!\!\perp \varepsilon$, denoting the distribution of $X$ by $P_X$, and the distribution of $\varepsilon$ by $P_\varepsilon$. Within the context of the regression model, we will be focused on linear contrasts $c^\top(\hat\beta - \beta)$, where $c \in \mathbb{R}^p$ is a fixed vector and $\hat\beta \in \mathbb{R}^p$ is an estimate of $\beta$. The importance of contrasts arises from the fact that they unify many questions about a linear model. For instance, testing the significance of the $i$th coefficient $\beta_i$ may be addressed by choosing $c$ to be the standard basis vector $c^\top = e_i^\top$. Another important problem is quantifying the uncertainty of point predictions, which may be addressed by choosing $c^\top = X_i^\top$, i.e. the $i$th row of the design matrix. In this case, an approximation to the law of the contrast leads to a confidence interval for the mean response value $E[Y_i] = X_i^\top \beta$. Further applications of contrasts occur in the broad topic of ANOVA [10]. Intuition for structure and regularization in RB. The following two paragraphs explain the core conceptual aspects of the paper. To understand the role of regularization in applying RB to high-dimensional regression, it is helpful to think of RB in terms of two ideas. First, if $\hat\beta_{LS}$ denotes the ordinary least squares estimator, then it is a simple but important fact that contrasts can be written as $c^\top(\hat\beta_{LS} - \beta) = a^\top\varepsilon$, where $a^\top := c^\top(X^\top X)^{-1} X^\top$. Hence, if it were possible to sample directly from $F_0$, then the law of any such contrast could be easily determined.
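The algebraic identity $c^\top(\hat\beta_{LS} - \beta) = a^\top\varepsilon$ is easy to check numerically. The sketch below uses invented toy data and plain NumPy; it is an illustration of the identity, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 5
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
eps = rng.standard_normal(n)          # plays the role of a draw from F_0
Y = X @ beta + eps                    # model (1)

# Least-squares fit and an arbitrary contrast vector c.
beta_ls = np.linalg.lstsq(X, Y, rcond=None)[0]
c = rng.standard_normal(p)

# a' = c'(X'X)^{-1}X', so the contrast equals a'eps exactly
# for a full-rank design.
a = X @ np.linalg.solve(X.T @ X, c)
assert np.isclose(c @ (beta_ls - beta), a @ eps)
```

If one could draw fresh copies of $\varepsilon$ from $F_0$, the law of the contrast would follow immediately from this identity; the residual bootstrap replaces those draws with resampled residuals.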
Since $F_0$ is unknown, the second key idea is to use the residuals of some estimator $\hat\beta$ as a proxy for samples from $F_0$. When $p \ll n$, the least-squares residuals are a good proxy [11, 12]. However, it is well known that least squares tends to overfit when $p/n \asymp 1$. When $\hat\beta_{LS}$ fits "too well", this means that its residuals are "too small", and hence they give a poor proxy for $F_0$. Therefore, by using a regularized estimator $\hat\beta$, overfitting can be avoided, and the residuals of $\hat\beta$ may offer a better way of obtaining "approximate samples" from $F_0$. The form of regularized regression we will focus on is ridge regression: $$\hat\beta_\rho := (X^\top X + \rho I_{p\times p})^{-1} X^\top Y, \qquad (2)$$ where $\rho > 0$ is a user-specified regularization parameter. As will be seen in Sections 3.2 and 3.3, the residuals obtained from ridge regression lead to a particularly good approximation of $F_0$ when the design matrix $X$ is nearly low-rank, in the sense that most of its singular values are close to 0. In essence, this condition is a form of sparsity, since it implies that the rows of $X$ nearly lie in a low-dimensional subspace of $\mathbb{R}^p$. However, this type of structural condition has a significant advantage over the more well-studied assumption that $\beta$ is sparse. Namely, the assumption that $X$ is nearly low-rank can be inspected directly in practice, whereas sparsity in $\beta$ is typically unverifiable. In fact, our results will impose no conditions on $\beta$, other than that $\|\beta\|_2$ remains bounded as $(n, p) \to \infty$. Finally, it is worth noting that the occurrence of near low-rank design matrices is actually very common in applications, and is often referred to as collinearity [13, ch. 17]. Contributions and outline. The primary contribution of this paper is a complement to the work of Bickel and Freedman [12] (hereafter B&F 1983), who showed that in general, the RB method fails to approximate the laws of least-squares contrasts $c^\top(\hat\beta_{LS} - \beta)$ when $p/n \asymp 1$.
Instead, we develop an alternative set of results, proving that even when $p/n \asymp 1$, RB can successfully approximate the laws of "ridged contrasts" $c^\top(\hat\beta_\rho - \beta)$ for many choices of $c \in \mathbb{R}^p$, provided that the design matrix $X$ is nearly low rank. A particularly interesting consequence of our work is that RB successfully approximates the law of $c^\top(\hat\beta_\rho - \beta)$ for a certain choice of $c$ that was shown in B&F 1983 to "break" RB when applied to least squares. Specifically, such a $c$ can be chosen as one of the rows of $X$ with a high leverage score (see Section 4). This example corresponds to the practical problem of setting confidence intervals for mean response values $E[Y_i] = X_i^\top \beta$. (See [12, p. 41], as well as Lemma 2 and Theorem 4 in Section 3.4.) Lastly, from a technical point of view, a third notable aspect of our results is that they are formulated in terms of the Mallows-$\ell_2$ metric, which frees us from having to impose conditions that force a limiting distribution to exist. Apart from B&F 1983, the most closely related works we are aware of are the recent papers [7] and [8], which also consider RB in the high-dimensional setting. However, these works focus on the role of sparsity in $\beta$ and do not make use of low-rank structure in the design, whereas our work deals only with structure in the design and imposes no sparsity assumptions on $\beta$. The remainder of the paper is organized as follows. In Section 2, we formulate the problem of approximating the laws of contrasts, and describe our proposed methodology for RB based on ridge regression. Then, in Section 3 we state several results that lay the groundwork for Theorem 4, which shows that RB can successfully approximate all of the laws $\mathcal{L}(X_i^\top(\hat\beta_\rho - \beta)\,|\,X)$, $i = 1, \ldots, n$, conditionally on a Gaussian design. Due to space constraints, all proofs are deferred to material that will appear in a separate work. Notation and conventions. If $U$ and $V$ are random variables, then $\mathcal{L}(U|V)$ denotes the law of $U$, conditionally on $V$.
If $a_n$ and $b_n$ are two sequences of real numbers, then the notation $a_n \lesssim b_n$ means that there is an absolute constant $\kappa_0 > 0$ and an integer $n_0 \ge 1$ such that $a_n \le \kappa_0 b_n$ for all $n \ge n_0$. The notation $a_n \asymp b_n$ means that $a_n \lesssim b_n$ and $b_n \lesssim a_n$. For a square matrix $A \in \mathbb{R}^{k\times k}$ whose eigenvalues are real, we denote them by $\lambda_{\min}(A) = \lambda_k(A) \le \cdots \le \lambda_1(A) = \lambda_{\max}(A)$. 2 Problem setup and methodology Problem setup. For any $c \in \mathbb{R}^p$, it is clear that conditionally on $X$, the law of $c^\top(\hat\beta_\rho - \beta)$ is completely determined by $F_0$, and hence it makes sense to use the notation $$\Psi_\rho(F_0; c) := \mathcal{L}\big(c^\top(\hat\beta_\rho - \beta) \,\big|\, X\big). \qquad (3)$$ The problem we aim to solve is to approximate the distribution $\Psi_\rho(F_0; c)$ for suitable choices of $c$. Review of the residual bootstrap (RB) procedure. We briefly explain the steps involved in the residual bootstrap procedure, applied to the ridge estimator $\hat\beta_\rho$ of $\beta$. To proceed somewhat indirectly, consider the following "bias-variance" decomposition of $\Psi_\rho(F_0; c)$, conditionally on $X$: $$\Psi_\rho(F_0; c) = \underbrace{\mathcal{L}\big(c^\top(\hat\beta_\rho - E[\hat\beta_\rho|X]) \,\big|\, X\big)}_{=:\ \Phi_\rho(F_0;\,c)} + \underbrace{c^\top\big(E[\hat\beta_\rho|X] - \beta\big)}_{=:\ \mathrm{bias}(\Phi_\rho(F_0;\,c))}. \qquad (4)$$ Note that the distribution $\Phi_\rho(F_0; c)$ has mean zero, so the second term on the right side is the bias of $\Phi_\rho(F_0; c)$ as an estimator of $\Psi_\rho(F_0; c)$. Furthermore, the distribution $\Phi_\rho(F_0; c)$ may be viewed as the "variance component" of $\Psi_\rho(F_0; c)$. We will be interested in situations where the regularization parameter $\rho$ may be chosen small enough so that the bias component is small. In this case, one has $\Psi_\rho(F_0; c) \approx \Phi_\rho(F_0; c)$, and then it is enough to find an approximation to the law $\Phi_\rho(F_0; c)$, which is unknown. To this end, a simple manipulation of $c^\top(\hat\beta_\rho - E[\hat\beta_\rho])$ leads to $$\Phi_\rho(F_0; c) = \mathcal{L}\big(c^\top(X^\top X + \rho I_{p\times p})^{-1} X^\top \varepsilon \,\big|\, X\big). \qquad (5)$$ Now, to approximate $\Phi_\rho(F_0; c)$, let $\hat F$ be any centered estimate of $F_0$. (Typically, $\hat F$ is obtained by using the centered residuals of some estimator of $\beta$, but this is not necessary in general.) Also, let $\varepsilon^* = (\varepsilon_1^*, \ldots, \varepsilon_n^*) \in \mathbb{R}^n$ be an i.i.d. sample from $\hat F$.
Then, replacing $\varepsilon$ with $\varepsilon^*$ in line (5) yields $$\Phi_\rho(\hat F; c) = \mathcal{L}\big(c^\top(X^\top X + \rho I_{p\times p})^{-1} X^\top \varepsilon^* \,\big|\, X\big). \qquad (6)$$ At this point, we define the (random) measure $\Phi_\rho(\hat F; c)$ to be the RB approximation to $\Phi_\rho(F_0; c)$. Hence, it is clear that the RB approximation is simply a "plug-in rule". A two-stage approach. An important feature of the procedure just described is that we are free to use any centered estimator $\hat F$ of $F_0$. This fact offers substantial flexibility in approximating $\Psi_\rho(F_0; c)$. One way of exploiting this flexibility is to consider a two-stage approach, where a "pilot" ridge estimator $\hat\beta_\varrho$ is used to first compute residuals whose centered empirical distribution function is $\hat F_\varrho$, say. Then, in the second stage, the distribution $\hat F_\varrho$ is used to approximate $\Phi_\rho(F_0; c)$ via the relation (6). To be more detailed, if $(\hat e_1(\varrho), \ldots, \hat e_n(\varrho)) = \hat e(\varrho) := Y - X\hat\beta_\varrho$ are the residuals of $\hat\beta_\varrho$, then we define $\hat F_\varrho$ to be the distribution that places mass $1/n$ at each of the values $\hat e_i(\varrho) - \bar e(\varrho)$, with $\bar e(\varrho) := \frac{1}{n}\sum_{i=1}^n \hat e_i(\varrho)$. Here, it is important to note that the value $\varrho$ is chosen to optimize $\hat F_\varrho$ as an approximation to $F_0$. By contrast, the choice of $\rho$ depends on the relative importance of width and coverage probability for confidence intervals based on $\Phi_\rho(\hat F_\varrho; c)$. Theorems 1, 3, and 4 will offer some guidance in selecting $\varrho$ and $\rho$. Resampling algorithm. To summarize the discussion above, if $B$ is a user-specified number of bootstrap replicates, our proposed method for approximating $\Psi_\rho(F_0; c)$ is given below.
1. Select $\rho$ and $\varrho$, and compute the residuals $\hat e(\varrho) = Y - X\hat\beta_\varrho$.
2. Compute the centered distribution function $\hat F_\varrho$, putting mass $1/n$ at each $\hat e_i(\varrho) - \bar e(\varrho)$.
3. For $j = 1, \ldots, B$:
   - Draw a vector $\varepsilon^* \in \mathbb{R}^n$ of $n$ i.i.d. samples from $\hat F_\varrho$.
   - Compute $z_j := c^\top(X^\top X + \rho I_{p\times p})^{-1} X^\top \varepsilon^*$.
4. Return the empirical distribution of $z_1, \ldots, z_B$.
Clearly, as $B \to \infty$, the empirical distribution of $z_1, \ldots, z_B$ converges weakly to $\Phi_\rho(\hat F_\varrho; c)$, with probability 1.
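The four steps of the resampling algorithm translate almost line for line into code. The sketch below is a minimal NumPy rendering under invented toy data; the function name and the particular values of $\varrho$ and $\rho$ are illustrative choices of ours, not prescriptions from the paper.

```python
import numpy as np

def rb_ridge_contrast(X, Y, c, rho, varrho, B=1000, rng=None):
    """Resampling algorithm: pilot ridge residuals give the centered
    distribution F_varrho (steps 1-2), which is resampled and pushed
    through the ridge map to produce z_1,...,z_B (steps 3-4)."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    # Step 1: residuals of the pilot ridge estimator beta_varrho.
    pilot = np.linalg.solve(X.T @ X + varrho * np.eye(p), X.T @ Y)
    e = Y - X @ pilot
    # Step 2: center the residuals (mass 1/n at each e_i - e_bar).
    e = e - e.mean()
    # Precompute a = X (X'X + rho I)^{-1} c, so that z_j = a' eps*.
    a = X @ np.linalg.solve(X.T @ X + rho * np.eye(p), c)
    # Steps 3-4: resample with replacement and collect bootstrap draws.
    z = np.array([a @ rng.choice(e, size=n, replace=True) for _ in range(B)])
    return z

# Toy usage: an approximate 90% CI for c'beta from bootstrap quantiles.
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 20))
beta = np.ones(20) / np.sqrt(20)
Y = X @ beta + 0.1 * rng.standard_normal(80)
c = X[0]
z = rb_ridge_contrast(X, Y, c, rho=1.0, varrho=5.0, B=2000, rng=2)
q05, q95 = np.quantile(z, [0.05, 0.95])
beta_rho = np.linalg.solve(X.T @ X + 1.0 * np.eye(20), X.T @ Y)
ci = (c @ beta_rho - q95, c @ beta_rho - q05)
```

The closing interval mirrors the "ridge" confidence interval used later in Section 4, which subtracts the upper and lower bootstrap quantiles from the point estimate.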
As is conventional, our theoretical analysis in the next section will ignore Monte Carlo issues, and address only the performance of $\Phi_\rho(\hat F_\varrho; c)$ as an approximation to $\Psi_\rho(F_0; c)$. 3 Main results The following metric will be central to our theoretical results, and has been a standard tool in the analysis of the bootstrap, beginning with the work of Bickel and Freedman [14]. The Mallows (Kantorovich) metric. For two random vectors $U$ and $V$ in a Euclidean space, the Mallows-$\ell_2$ metric is defined by $$d_2^2\big(\mathcal{L}(U), \mathcal{L}(V)\big) := \inf_{\pi \in \Pi}\Big\{ E\big[\|U - V\|_2^2\big] : (U, V) \sim \pi \Big\}, \qquad (7)$$ where the infimum is over the class $\Pi$ of joint distributions $\pi$ whose marginals are $\mathcal{L}(U)$ and $\mathcal{L}(V)$. It is worth noting that convergence in $d_2$ is strictly stronger than weak convergence, since it also requires convergence of second moments. Additional details may be found in the paper [14]. 3.1 A bias-variance decomposition for bootstrap approximation To give some notation for analyzing the bias-variance decomposition of $\Psi_\rho(F_0; c)$ in line (4), we define the following quantities based upon the ridge estimator $\hat\beta_\rho$. Namely, the variance is $$v_\rho = v_\rho(X; c) := \mathrm{var}(\Psi_\rho(F_0; c)\,|\,X) = \sigma^2 \big\|c^\top(X^\top X + \rho I_{p\times p})^{-1} X^\top\big\|_2^2.$$ To express the bias of $\Phi_\rho(F_0; c)$, we define the vector $\delta(X) \in \mathbb{R}^p$ according to $$\delta(X) := \beta - E[\hat\beta_\rho] = \big(I_{p\times p} - (X^\top X + \rho I_{p\times p})^{-1} X^\top X\big)\beta, \qquad (8)$$ and then put $$b_\rho^2 = b_\rho^2(X; c) := \mathrm{bias}^2(\Phi_\rho(F_0; c)) = \big(c^\top \delta(X)\big)^2. \qquad (9)$$ We will sometimes omit the arguments of $v_\rho$ and $b_\rho^2$ to lighten notation. Note that $v_\rho(X; c)$ does not depend on $\beta$, and $b_\rho^2(X; c)$ only depends on $\beta$ through $\delta(X)$. The following result gives a regularized and high-dimensional extension of some lemmas in Freedman's early work [11] on RB for least squares. The result does not require any structural conditions on the design matrix, or on the true parameter $\beta$. Theorem 1 (consistency criterion). Suppose $X \in \mathbb{R}^{n\times p}$ is fixed. Let $\hat F$ be any estimator of $F_0$, and let $c \in \mathbb{R}^p$ be any vector such that $v_\rho = v_\rho(X; c) \neq 0$.
Then with $P_\varepsilon$-probability 1, the following inequality holds for every $n \ge 1$, and every $\rho > 0$: $$d_2^2\Big(\tfrac{1}{\sqrt{v_\rho}}\,\Psi_\rho(F_0; c),\ \tfrac{1}{\sqrt{v_\rho}}\,\Phi_\rho(\hat F; c)\Big) \le \frac{1}{\sigma^2}\, d_2^2(F_0, \hat F) + \frac{b_\rho^2}{v_\rho}. \qquad (10)$$ Remarks. Observe that the normalization $1/\sqrt{v_\rho}$ ensures that the bound is non-trivial, since the distribution $\Psi_\rho(F_0; c)/\sqrt{v_\rho}$ has variance equal to 1 for all $n$ (and hence does not become degenerate for large $n$). To consider the choice of $\rho$, it is simple to verify that the ratio $b_\rho^2/v_\rho$ decreases monotonically as $\rho$ decreases. Note also that as $\rho$ becomes small, the variance $v_\rho$ becomes large, and likewise, confidence intervals based on $\Phi_\rho(\hat F; c)$ become wider. In other words, there is a trade-off between the width of the confidence interval and the size of the bound (10). Sufficient conditions for consistency of RB. An important practical aspect of Theorem 1 is that for any given contrast $c$, the variance $v_\rho(X; c)$ can be easily estimated, since it only requires an estimate of $\sigma^2$, which can be obtained from $\hat F$. Consequently, whenever theoretical bounds on $d_2^2(F_0, \hat F)$ and $b_\rho^2(X; c)$ are available, the right side of line (10) can be controlled. In this way, Theorem 1 offers a simple route for guaranteeing that RB is consistent. In Sections 3.2 and 3.3 to follow, we derive a bound on $E[d_2^2(F_0, \hat F)\,|\,X]$ in the case where $\hat F$ is chosen to be $\hat F_\varrho$. Later on in Section 3.4, we study RB consistency in the context of prediction with a Gaussian design, and there we derive high-probability bounds on both $v_\rho(X; c)$ and $b_\rho^2(X; c)$, where $c$ is a particular row of $X$. 3.2 A link between bootstrap consistency and MSPE If $\hat\beta$ is an estimator of $\beta$, its mean-squared prediction error (MSPE), conditionally on $X$, is defined as $$\mathrm{mspe}(\hat\beta\,|\,X) := \frac{1}{n}\, E\big[\|X(\hat\beta - \beta)\|_2^2 \,\big|\, X\big]. \qquad (11)$$ The previous subsection showed that in-law approximation of contrasts is closely tied to the approximation of $F_0$.
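In a simulation where $\beta$ is known, both ingredients of the bound (10) can be computed exactly from the definitions in (8) and (9). A small sketch (the helper name is ours, and the data are invented):

```python
import numpy as np

def ridge_bias_variance(X, beta, c, rho, sigma2):
    """v_rho(X; c) and b_rho^2(X; c) from eqs. (8)-(9)."""
    p = X.shape[1]
    M = np.linalg.inv(X.T @ X + rho * np.eye(p))   # (X'X + rho I)^{-1}
    v = sigma2 * np.sum((X @ (M @ c)) ** 2)        # sigma^2 ||c' M X'||_2^2
    delta = beta - M @ (X.T @ X @ beta)            # delta(X) in eq. (8)
    b2 = float(c @ delta) ** 2                     # eq. (9)
    return v, b2

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta = rng.standard_normal(10)
c = X[0]
v, b2 = ridge_bias_variance(X, beta, c, rho=1.0, sigma2=0.01)
# As rho -> 0 on a full-rank design, delta(X) -> 0, so the bias vanishes,
# consistent with the remark that b2/v shrinks as rho decreases.
_, b2_tiny = ridge_bias_variance(X, beta, c, rho=1e-10, sigma2=0.01)
assert v > 0 and b2 >= 0
assert b2_tiny < 1e-10
```

This makes concrete the point that $v_\rho(X;c)$ needs only $\sigma^2$, while $b_\rho^2(X;c)$ additionally requires $\beta$ (and hence is only computable in simulation, or bounded theoretically as in Lemma 2).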
We now take a second step of showing that if the centered residuals of an estimator $\hat\beta$ are used to approximate $F_0$, then the quality of this approximation can be bounded naturally in terms of $\mathrm{mspe}(\hat\beta\,|\,X)$. This result applies to any estimator $\hat\beta$ computed from the observations (1). Theorem 2. Suppose $X \in \mathbb{R}^{n\times p}$ is fixed. Let $\hat\beta$ be any estimator of $\beta$, and let $\hat F$ be the empirical distribution of the centered residuals of $\hat\beta$. Also, let $F_n$ denote the empirical distribution of $n$ i.i.d. samples from $F_0$. Then for every $n \ge 1$, $$E\big[d_2^2(\hat F, F_0)\,\big|\,X\big] \le 2\,\mathrm{mspe}(\hat\beta\,|\,X) + 2\,E[d_2^2(F_n, F_0)] + \frac{2\sigma^2}{n}. \qquad (12)$$ Remarks. As we will see in the next section, the MSPE of ridge regression can be bounded in a sharp way when the design matrix is approximately low rank, and there we will analyze $\mathrm{mspe}(\hat\beta_\varrho\,|\,X)$ for the pilot estimator. Consequently, when near low-rank structure is available, the only remaining issue in controlling the right side of line (12) is to bound the quantity $E[d_2^2(F_n, F_0)\,|\,X]$. The very recent work of Bobkov and Ledoux [15] provides an in-depth study of this question, and they derive a variety of bounds under different tail conditions on $F_0$. We summarize one of their results below. Lemma 1 (Bobkov and Ledoux, 2014). If $F_0$ has a finite fourth moment, then $$E[d_2^2(F_n, F_0)] \lesssim \log(n)\, n^{-1/2}. \qquad (13)$$ Remarks. The fact that the squared distance is bounded at the rate of $\log(n)\, n^{-1/2}$ is an indication that $d_2$ is a rather strong metric on distributions. For a detailed discussion of this result, see Corollaries 7.17 and 7.18 in the paper [15]. Although it is possible to obtain faster rates when more stringent tail conditions are placed on $F_0$, we will only need a fourth moment, since the $\mathrm{mspe}(\hat\beta\,|\,X)$ term in Theorem 2 will often have a slower rate than $\log(n)\, n^{-1/2}$, as discussed in the next section.
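For the one-dimensional empirical distributions relevant to Lemma 1, the infimum in (7) is attained by pairing order statistics, which makes $d_2$ easy to compute. A sketch, with a quick sanity check on invented data:

```python
import numpy as np

def mallows_d2(x, y):
    """Mallows-l2 (Wasserstein-2) distance between two empirical
    distributions on R with the same number of equal-mass atoms:
    the optimal coupling in (7) simply sorts both samples."""
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    return np.sqrt(np.mean((x - y) ** 2))

# Translating a sample by t moves it exactly distance |t| in d_2.
x = np.array([0.0, 1.0, 2.0, 5.0])
assert np.isclose(mallows_d2(x, x + 0.5), 0.5)
assert mallows_d2(x, x) == 0.0
```

The sorted-pairing formula is the standard closed form for equal-size, equally weighted samples on the line; for general distributions the metric is defined through the coupling infimum in (7).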
3.3 Consistency of ridge regression in MSPE for near low-rank designs In this subsection, we show that when the tuning parameter $\varrho$ is set at a suitable rate, the pilot ridge estimator $\hat\beta_\varrho$ is consistent in MSPE when the design matrix is near low-rank, even when $p/n$ is large, and without any sparsity constraints on $\beta$. We now state some assumptions. A1. There is a number $\nu > 0$, and absolute constants $\kappa_1, \kappa_2 > 0$, such that $\kappa_1 i^{-\nu} \le \lambda_i(\hat\Sigma) \le \kappa_2 i^{-\nu}$ for all $i = 1, \ldots, n \wedge p$. A2. There are absolute constants $\theta, \gamma > 0$, such that for every $n \ge 1$, $\varrho/n = n^{-\theta}$ and $\rho/n = n^{-\gamma}$. A3. The vector $\beta \in \mathbb{R}^p$ satisfies $\|\beta\|_2 \lesssim 1$. Due to Theorem 2, the following bound shows that the residuals of $\hat\beta_\varrho$ may be used to extract a consistent approximation to $F_0$. Two other notable features of the bound are that it is non-asymptotic and dimension-free. Theorem 3. Suppose that $X \in \mathbb{R}^{n\times p}$ is fixed and that Assumptions 1–3 hold, with $p/n \asymp 1$. Assume further that $\theta$ is chosen as $\theta = \frac{2\nu}{3}$ when $\nu \in (0, \frac12)$, and $\theta = \frac{\nu}{\nu+1}$ when $\nu > \frac12$. Then, $$\mathrm{mspe}(\hat\beta_\varrho\,|\,X) \lesssim \begin{cases} n^{-2\nu/3} & \text{if } \nu \in (0, \tfrac12), \\ n^{-\nu/(\nu+1)} & \text{if } \nu > \tfrac12. \end{cases} \qquad (14)$$ Also, both bounds in (14) are tight in the sense that $\beta$ can be chosen so that $\hat\beta_\varrho$ attains either rate. Remarks. Since the eigenvalues $\lambda_i(\hat\Sigma)$ are observable, they may be used to estimate $\nu$ and guide the selection of $\varrho/n = n^{-\theta}$. However, from a practical point of view, we found it easier to select $\varrho$ via cross-validation in numerical experiments, rather than via an estimate of $\nu$. A link with Pinsker's Theorem. In the particular case when $F_0$ is a centered Gaussian distribution, the "prediction problem" of estimating $X\beta$ is very similar to estimating the mean parameters of a Gaussian sequence model, with error measured in the $\ell_2$ norm. In the alternative sequence-model format, the decay condition on the eigenvalues of $\frac{1}{n}X^\top X$ translates into an ellipsoid constraint on the mean parameter sequence [16, 17].
For this reason, Theorem 3 may be viewed as a "regression version" of $\ell_2$ error bounds for the sequence model under an ellipsoid constraint (cf. Pinsker's Theorem, [16, 17]). Due to the fact that the latter problem has a very well developed literature, there may be various "neighboring results" elsewhere. Nevertheless, we could not find a direct reference for our stated MSPE bound in the current setup. For the purposes of our work in this paper, the more important point to take away from Theorem 3 is that it can be coupled with Theorem 2 for proving consistency of RB. 3.4 Confidence intervals for mean responses, conditionally on a Gaussian design In this section, we consider the situation where the design matrix $X$ has rows $X_i^\top \in \mathbb{R}^p$ drawn i.i.d. from a multivariate normal distribution $N(0, \Sigma)$, with $X \perp\!\!\!\perp \varepsilon$. (The covariance matrix $\Sigma$ may vary with $n$.) Conditionally on a realization of $X$, we analyze the RB approximation of the laws $\Psi_\rho(F_0; X_i) = \mathcal{L}(X_i^\top(\hat\beta_\rho - \beta)\,|\,X)$. As discussed in Section 1, this corresponds to the problem of setting confidence intervals for the mean responses $E[Y_i] = X_i^\top \beta$. Assuming that the population eigenvalues $\lambda_i(\Sigma)$ obey a decay condition, we show below in Theorem 4 that RB succeeds with high $P_X$-probability. Moreover, this consistency statement holds for all of the laws $\Psi_\rho(F_0; X_i)$ simultaneously. That is, among the $n$ distinct laws $\Psi_\rho(F_0; X_i)$, $i = 1, \ldots, n$, even the worst bootstrap approximation is still consistent. We now state some population-level assumptions. A4. The operator norm of $\Sigma \in \mathbb{R}^{p\times p}$ satisfies $\|\Sigma\|_{op} \lesssim 1$. Next, we impose a decay condition on the eigenvalues of $\Sigma$. This condition also ensures that $\Sigma$ is invertible for each fixed $p$, even though the bottom eigenvalue may become arbitrarily small as $p$ becomes large. It is important to notice that we now use $\eta$ for the decay exponent of the population eigenvalues, whereas we used $\nu$ when describing the sample eigenvalues in the previous section. A5.
There is a number $\eta > 0$, and absolute constants $k_1, k_2 > 0$, such that for all $i = 1, \ldots, p$, $k_1 i^{-\eta} \le \lambda_i(\Sigma) \le k_2 i^{-\eta}$. A6. There are absolute constants $k_3, k_4 \in (0, 1)$ such that for all $n \ge 3$, we have the bounds $k_3 \le \frac{p}{n} \le k_4$ and $p \le n - 2$. The following lemma collects most of the effort needed in proving our final result in Theorem 4. Here it is also helpful to recall the notation $\rho/n = n^{-\gamma}$ and $\varrho/n = n^{-\theta}$ from Assumption 2. Lemma 2. Suppose that the matrix $X \in \mathbb{R}^{n\times p}$ has rows $X_i^\top$ drawn i.i.d. from $N(0, \Sigma)$, and that Assumptions 2–6 hold. Furthermore, assume that $\gamma$ is chosen so that $0 < \gamma < \min\{\eta, 1\}$. Then, the statements below are true. (i) (bias inequality) Fix any $\tau > 0$. Then, there is an absolute constant $\kappa_0 > 0$, such that for all large $n$, the following event holds with $P_X$-probability at least $1 - n^{-\tau} - n e^{-n/16}$: $$\max_{1\le i\le n}\, b_\rho^2(X; X_i) \le \kappa_0 \cdot n^{-\gamma} \cdot (\tau + 1)\log(n + 2). \qquad (15)$$ (ii) (variance inequality) There are absolute constants $\kappa_1, \kappa_2 > 0$ such that for all large $n$, the following event holds with $P_X$-probability at least $1 - 4n\exp(-\kappa_1 n^{\gamma/\eta})$: $$\max_{1\le i\le n}\, \frac{1}{v_\rho(X; X_i)} \le \kappa_2\, n^{1 - \gamma/\eta}. \qquad (16)$$ (iii) (mspe inequalities) Suppose that $\theta$ is chosen as $\theta = 2\eta/3$ when $\eta \in (0, \frac12)$, and that $\theta$ is chosen as $\theta = \frac{\eta}{1+\eta}$ when $\eta > \frac12$. Then, there are absolute constants $\kappa_3, \kappa_4, \kappa_5, \kappa_6 > 0$ such that for all large $n$, $$\mathrm{mspe}(\hat\beta_\varrho\,|\,X) \le \begin{cases} \kappa_4\, n^{-2\eta/3} & \text{with } P_X\text{-probability at least } 1 - \exp(-\kappa_3 n^{2 - 4\eta/3}), \text{ if } \eta \in (0, \tfrac12), \\ \kappa_6\, n^{-\eta/(\eta+1)} & \text{with } P_X\text{-probability at least } 1 - \exp(-\kappa_5 n^{2/(1+\eta)}), \text{ if } \eta > \tfrac12. \end{cases}$$ Remarks. Note that the two rates in part (iii) coincide as $\eta$ approaches $1/2$. At a conceptual level, the entire lemma may be explained in relatively simple terms. Viewing the quantities $\mathrm{mspe}(\hat\beta_\varrho\,|\,X)$, $b_\rho^2(X; X_i)$ and $v_\rho(X; X_i)$ as functionals of a Gaussian matrix, the proof involves deriving concentration bounds for each of them. Indeed, this is plausible given that these quantities are smooth functionals of $X$. However, the difficulty of the proof arises from the fact that they are also highly non-linear functionals of $X$.
We now combine Lemmas 1 and 2 with Theorems 1 and 2 to show that all of the laws $\Psi_\rho(F_0; X_i)$ can be simultaneously approximated via our two-stage RB method. Theorem 4. Suppose that $F_0$ has a finite fourth moment, Assumptions 2–6 hold, and $\gamma$ is chosen so that $\frac{\eta}{1+\eta} < \gamma < \min\{\eta, 1\}$. Also suppose that $\theta$ is chosen as $\theta = 2\eta/3$ when $\eta \in (0, \frac12)$, and $\theta = \frac{\eta}{\eta+1}$ when $\eta > \frac12$. Then, there is a sequence of positive numbers $\delta_n$ with $\lim_{n\to\infty}\delta_n = 0$, such that the event $$E\Big[\max_{1\le i\le n}\, d_2^2\Big(\tfrac{1}{\sqrt{v_\rho}}\,\Psi_\rho(F_0; X_i),\ \tfrac{1}{\sqrt{v_\rho}}\,\Phi_\rho(\hat F_\varrho; X_i)\Big) \,\Big|\, X\Big] \le \delta_n \qquad (17)$$ has $P_X$-probability tending to 1 as $n \to \infty$. Remark. Lemma 2 gives explicit bounds on the numbers $\delta_n$, as well as the probabilities of the corresponding events, but we have stated the result in this way for the sake of readability. 4 Simulations In four different settings of $n$, $p$, and the decay parameter $\eta$, we compared the nominal 90% confidence intervals (CIs) of four methods: "oracle", "ridge", "normal", and "OLS", to be described below. In each setting, we generated $N_1 := 100$ random designs $X$ with i.i.d. rows drawn from $N(0, \Sigma)$, where $\lambda_j(\Sigma) = j^{-\eta}$, $j = 1, \ldots, p$, and the eigenvectors of $\Sigma$ were drawn randomly by setting them to be the Q factor in a QR decomposition of a standard $p \times p$ Gaussian matrix. Then, for each realization of $X$, we generated $N_2 := 1000$ realizations of $Y$ according to the model (1), where $\beta = \mathbf{1}/\|\mathbf{1}\|_2 \in \mathbb{R}^p$, and $F_0$ is the centered t distribution on 5 degrees of freedom, rescaled to have standard deviation $\sigma = 0.1$. For each $X$, and each corresponding $Y$, we considered the problem of setting a 90% CI for the mean response value $X_{i^\star}^\top\beta$, where $X_{i^\star}^\top$ is the row with the highest leverage score, i.e. $i^\star = \mathrm{argmax}_{1\le i\le n} H_{ii}$ and $H := X(X^\top X)^{-1}X^\top$. This problem was shown in B&F 1983 to be a case where the standard RB method based on least squares fails when $p/n \asymp 1$. Below, we refer to this method as "OLS".
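The random designs just described can be generated directly from the stated recipe. The sketch below (function name ours, seeds arbitrary) also picks out the high-leverage row $i^\star$ used as the contrast.

```python
import numpy as np

def simulate_design(n, p, eta, rng):
    """Rows i.i.d. N(0, Sigma), where Sigma has eigenvalues j^(-eta)
    and a random eigenbasis given by the Q factor of a standard
    Gaussian matrix, as in the simulation setup."""
    lam = np.arange(1, p + 1, dtype=float) ** (-eta)
    Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
    sigma_half = Q @ np.diag(np.sqrt(lam)) @ Q.T   # symmetric square root
    return rng.standard_normal((n, p)) @ sigma_half

rng = np.random.default_rng(0)
X = simulate_design(100, 45, 0.5, rng)             # setting 1: n=100, p=45
# Highest-leverage row: the contrast that "breaks" OLS-based RB in B&F 1983.
H = X @ np.linalg.solve(X.T @ X, X.T)
i_star = int(np.argmax(np.diag(H)))
assert X.shape == (100, 45)
assert np.isclose(np.trace(H), 45)                 # trace(H) = p
```

Multiplying standard normal rows by the symmetric square root of $\Sigma$ yields the desired row covariance, since $\Sigma^{1/2}\Sigma^{1/2} = \Sigma$.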
To describe the other three methods, "ridge" refers to the interval $[X_{i^\star}^\top\hat\beta_\rho - \hat q_{0.95},\ X_{i^\star}^\top\hat\beta_\rho - \hat q_{0.05}]$, where $\hat q_\alpha$ is the $\alpha$ quantile of the numbers $z_1, \ldots, z_B$ computed in the proposed algorithm in Section 2, with $B = 1000$ and $c^\top = X_{i^\star}^\top$. To choose the parameters $\rho$ and $\varrho$ for a given $X$ and $Y$, we first computed $\hat r$ as the value that optimized the MSPE of a ridge estimator $\hat\beta_r$ with respect to 5-fold cross validation; i.e. cross validation was performed for every distinct pair $(X, Y)$. We then put $\varrho = 5\hat r$ and $\rho = 0.1\hat r$, as we found the prefactors 5 and 0.1 to work adequately across various settings. (Optimizing $\varrho$ with respect to MSPE is motivated by Theorems 1, 2, and 3. Also, choosing $\rho$ to be somewhat smaller than $\varrho$ conforms with the constraints on $\theta$ and $\gamma$ in Theorem 4.) The method "normal" refers to the CI based on the normal approximation $\mathcal{L}(X_{i^\star}^\top(\hat\beta_\rho - \beta)\,|\,X) \approx N(0, \hat\tau^2)$, where $\hat\tau^2 = \hat\sigma^2\|X_{i^\star}^\top(X^\top X + \rho I_{p\times p})^{-1}X^\top\|_2^2$, $\rho = 0.1\hat r$, and $\hat\sigma^2$ is the usual unbiased estimate of $\sigma^2$ based on OLS residuals. The "oracle" method refers to the interval $[X_{i^\star}^\top\hat\beta_\rho - \tilde q_{0.95},\ X_{i^\star}^\top\hat\beta_\rho - \tilde q_{0.05}]$, with $\rho = 0.1\hat r$, and $\tilde q_\alpha$ being the empirical $\alpha$ quantile of $X_{i^\star}^\top(\hat\beta_\rho - \beta)$ over all 1000 realizations of $Y$ based on a given $X$. (This accounts for the randomness in $\rho = 0.1\hat r$.) Within a given setting of the triplet $(n, p, \eta)$, we refer to the "coverage" of a method as the fraction of the $N_1 \times N_2 = 10^5$ instances where the method's CI contained the parameter $X_{i^\star}^\top\beta$. Also, we refer to "width" as the average width of a method's intervals over all of the $10^5$ instances. The four settings of $(n, p, \eta)$ correspond to moderate/high dimension and moderate/fast decay of the eigenvalues $\lambda_i(\Sigma)$. Even in the moderate case of $p/n = 0.45$, the results show that the OLS intervals are too narrow and have coverage noticeably less than 90%. As expected, this effect becomes more pronounced when $p/n = 0.95$. The ridge and normal intervals perform reasonably well across settings, with both performing much better than OLS.
However, it should be emphasized that our study of RB is motivated by the desire to gain insight into the behavior of the bootstrap in high dimensions, rather than by trying to outperform particular methods. In future work, we plan to investigate the relative merits of the ridge and normal intervals in greater detail.

Table 1: Comparison of nominal 90% confidence intervals

                                              oracle   ridge   normal   OLS
Setting 1 (n = 100, p = 45, η = 0.5)  width    0.21     0.20    0.23    0.16
                                      coverage 0.90     0.87    0.91    0.81
Setting 2 (n = 100, p = 95, η = 0.5)  width    0.22     0.26    0.26    0.06
                                      coverage 0.90     0.88    0.88    0.42
Setting 3 (n = 100, p = 45, η = 1)    width    0.20     0.21    0.22    0.16
                                      coverage 0.90     0.90    0.91    0.81
Setting 4 (n = 100, p = 95, η = 1)    width    0.21     0.26    0.23    0.06
                                      coverage 0.90     0.92    0.87    0.42

Acknowledgements. MEL thanks Prof. Peter J. Bickel for many helpful discussions, and gratefully acknowledges the DOE CSGF under grant DE-FG02-97ER25308, as well as the NSF-GRFP.

References
[1] C.-H. Zhang and S. S. Zhang. Confidence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society: Series B, 76(1):217–242, 2014.
[2] A. Javanmard and A. Montanari. Hypothesis testing in high-dimensional regression under the Gaussian random design model: Asymptotic theory. arXiv preprint arXiv:1301.4240, 2013.
[3] A. Javanmard and A. Montanari. Confidence intervals and hypothesis testing for high-dimensional regression. arXiv preprint arXiv:1306.3171, 2013.
[4] P. Bühlmann. Statistical significance in high-dimensional linear models. Bernoulli, 19(4):1212–1242, 2013.
[5] S. van de Geer, P. Bühlmann, and Y. Ritov. On asymptotically optimal confidence regions and tests for high-dimensional models. arXiv preprint arXiv:1303.0518, 2013.
[6] J. D. Lee, D. L. Sun, Y. Sun, and J. E. Taylor. Exact inference after model selection via the lasso. arXiv preprint arXiv:1311.6238, 2013.
[7] A. Chatterjee and S. N. Lahiri.
Rates of convergence of the adaptive lasso estimators to the oracle distribution and higher order refinements by the bootstrap. The Annals of Statistics, 41(3):1232–1259, 2013.
[8] H. Liu and B. Yu. Asymptotic properties of lasso+mls and lasso+ridge in sparse high-dimensional linear regression. Electronic Journal of Statistics, 7:3124–3169, 2013.
[9] V. Chernozhukov, D. Chetverikov, and K. Kato. Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors. The Annals of Statistics, 41(6):2786–2819, 2013.
[10] E. L. Lehmann and J. P. Romano. Testing statistical hypotheses. Springer, 2005.
[11] D. A. Freedman. Bootstrapping regression models. The Annals of Statistics, 9(6):1218–1228, 1981.
[12] P. J. Bickel and D. A. Freedman. Bootstrapping regression models with many parameters. In Festschrift for Erich L. Lehmann, pages 28–48. Wadsworth, 1983.
[13] N. R. Draper and H. Smith. Applied regression analysis. Wiley-Interscience, 1998.
[14] P. J. Bickel and D. A. Freedman. Some asymptotic theory for the bootstrap. The Annals of Statistics, pages 1196–1217, 1981.
[15] S. Bobkov and M. Ledoux. One-dimensional empirical measures, order statistics, and Kantorovich transport distances. Preprint, 2014.
[16] A. B. Tsybakov. Introduction to nonparametric estimation. Springer, 2009.
[17] L. Wasserman. All of nonparametric statistics. Springer, 2006.
|
2014
|
178
|
5,267
|
Tighten after Relax: Minimax-Optimal Sparse PCA in Polynomial Time Zhaoran Wang Huanran Lu Han Liu Department of Operations Research and Financial Engineering Princeton University Princeton, NJ 08540 {zhaoran,huanranl,hanliu}@princeton.edu Abstract We provide statistical and computational analysis of sparse Principal Component Analysis (PCA) in high dimensions. The sparse PCA problem is highly nonconvex in nature. Consequently, though its global solution attains the optimal statistical rate of convergence, such solution is computationally intractable to obtain. Meanwhile, although its convex relaxations are tractable to compute, they yield estimators with suboptimal statistical rates of convergence. On the other hand, existing nonconvex optimization procedures, such as greedy methods, lack statistical guarantees. In this paper, we propose a two-stage sparse PCA procedure that attains the optimal principal subspace estimator in polynomial time. The main stage employs a novel algorithm named sparse orthogonal iteration pursuit, which iteratively solves the underlying nonconvex problem. However, our analysis shows that this algorithm only has desired computational and statistical guarantees within a restricted region, namely the basin of attraction. To obtain the desired initial estimator that falls into this region, we solve a convex formulation of sparse PCA with early stopping. Under an integrated analytic framework, we simultaneously characterize the computational and statistical performance of this two-stage procedure. Computationally, our procedure converges at the rate of 1/ √ t within the initialization stage, and at a geometric rate within the main stage. Statistically, the final principal subspace estimator achieves the minimax-optimal statistical rate of convergence with respect to the sparsity level s∗, dimension d and sample size n. 
Our procedure motivates a general paradigm of tackling nonconvex statistical learning problems with provable statistical guarantees. 1 Introduction We denote by $x_1, \ldots, x_n$ the $n$ realizations of a random vector $X \in \mathbb{R}^d$ with population covariance matrix $\Sigma \in \mathbb{R}^{d\times d}$. The goal of Principal Component Analysis (PCA) is to recover the top $k$ leading eigenvectors $u_1^*, \ldots, u_k^*$ of $\Sigma$. In high dimensional settings with $d \gg n$, [1–3] showed that classical PCA can be inconsistent. Additional assumptions are needed to avoid such a curse of dimensionality. For example, when the first leading eigenvector is of primary interest, one common assumption is that $u_1^*$ is sparse: the number of nonzero entries of $u_1^*$, denoted by $s^*$, is smaller than $n$. Under such an assumption of sparsity, significant progress has been made on the methodological development [4–13] as well as theoretical understanding [1, 3, 14–21] of sparse PCA. However, there remains a significant gap between the computational and statistical aspects of sparse PCA: No tractable algorithm is known to provably attain the statistically optimal sparse PCA estimator without relying on the spiked covariance assumption. This gap arises from the nonconvexity of sparse PCA. In detail, the sparse PCA estimator for the first leading eigenvector $u_1^*$ is $$\hat u_1 = \mathop{\mathrm{argmin}}_{\|v\|_2 = 1} -v^\top \hat\Sigma v, \quad \text{subject to } \|v\|_0 = s^*, \qquad (1)$$ where $\hat\Sigma$ is the sample covariance estimator, $\|\cdot\|_2$ is the Euclidean norm, $\|\cdot\|_0$ gives the number of nonzero coordinates, and $s^*$ is the sparsity level of $u_1^*$. Although this estimator has been proven to attain the optimal statistical rate of convergence [15, 17], its computation is intractable because it requires minimizing a concave function over cardinality constraints [22]. Estimating the top $k$ leading eigenvectors is even more challenging because of the extra orthogonality constraint on $\hat u_1, \ldots, \hat u_k$.
To address this computational issue, [5] proposed a convex relaxation approach, named DSPCA, for estimating the first leading eigenvector. [13] generalized DSPCA to estimate the principal subspace spanned by the top $k$ leading eigenvectors. Nevertheless, [13] proved that the obtained estimator only attains the suboptimal $s^*\sqrt{\log d/n}$ statistical rate. Meanwhile, several methods have been proposed to directly address the underlying nonconvex problem (1), e.g., variants of power methods or iterative thresholding methods [10–12], a greedy method [8], as well as regression-type methods [4, 6, 7, 18]. However, most of these methods lack statistical guarantees. There are several exceptions: (1) [11] proposed the truncated power method, which attains the optimal $\sqrt{s^*\log d/n}$ rate for estimating $u_1^*$. However, it hinges on the assumption that the initial estimator $u^{(0)}$ satisfies $\sin\angle(u^{(0)}, u^*) \le 1 - C$, where $C \in (0, 1)$ is a constant. If $u^{(0)}$ is chosen uniformly at random on the $\ell_2$ sphere, this assumption holds with probability decreasing to zero as $d \to \infty$ [23]. (2) [12] proposed an iterative thresholding method, which attains a near-optimal statistical rate when estimating several individual leading eigenvectors. [18] proposed a regression-type method, which attains the optimal principal subspace estimator. However, these two methods hinge on the spiked covariance assumption, and require the data to be exactly Gaussian (sub-Gaussian distributions are not covered). For them, the spiked covariance assumption is crucial, because they use the diagonal thresholding method [1] to obtain the initialization, which fails when the assumption of spiked covariance doesn't hold, or when each coordinate of $X$ has the same variance. Besides, except for [12] and [18], all of these computational procedures only recover the first leading eigenvector, and leverage the deflation method [24] to recover the rest, which leads to identifiability and orthogonality issues when the top $k$ eigenvalues of $\Sigma$ are not distinct.
To close the gap between the computational and statistical aspects of sparse PCA, we propose a two-stage procedure for estimating the $k$-dimensional principal subspace $\mathcal{U}^*$ spanned by the top $k$ leading eigenvectors $u_1^*, \ldots, u_k^*$. The details of the two stages are as follows: (1) For the main stage, we propose a novel algorithm, named sparse orthogonal iteration pursuit, to directly estimate the principal subspace of $\Sigma$. Our analysis shows that, when its initialization falls into a restricted region, namely the basin of attraction, this algorithm enjoys a fast optimization rate of convergence, and attains the optimal principal subspace estimator. (2) To obtain the desired initialization, we compute a convex relaxation of sparse PCA. Unlike [5, 13], which calculate the exact minimizers, we early stop the corresponding optimization algorithm as soon as the iterative sequence enters the basin of attraction for the main stage. The rationale is that this convex optimization algorithm converges at a slow sublinear rate towards a suboptimal estimator, and incurs relatively high computational overhead within each iteration. Under a unified analytic framework, we provide simultaneous statistical and computational guarantees for this two-stage procedure. Given that the sample size $n$ is sufficiently large, and the eigengap between the $k$-th and $(k+1)$-th eigenvalues of the population covariance matrix $\Sigma$ is nonzero, we prove: (1) The final subspace estimator $\hat{\mathcal{U}}$ attained by our two-stage procedure achieves the minimax-optimal $\sqrt{s^*\log d/n}$ statistical rate of convergence. (2) Within the initialization stage, the iterative sequence of subspace estimators $\{\mathcal{U}^{(t)}\}_{t=0}^{T}$ (at the $T$-th iteration we early stop the initialization stage) satisfies $$D(\mathcal{U}^*, \mathcal{U}^{(t)}) \le \underbrace{\delta_1(\Sigma)\cdot s^*\sqrt{\log d/n}}_{\text{Statistical Error}} + \underbrace{\delta_2(k, s^*, d, n)\cdot 1/\sqrt{t}}_{\text{Optimization Error}} \qquad (2)$$ with high probability. Here $D(\cdot,\cdot)$ is the subspace distance, while $s^*$ is the sparsity level of $\mathcal{U}^*$, both of which will be defined in §2.
Here $\delta_1(\Sigma)$ is a quantity which depends on the population covariance matrix $\Sigma$, while $\delta_2(k, s^*, d, n)$ depends on $k$, $s^*$, $d$ and $n$ (see §4 for details). (3) Within the main stage, the iterative sequence $\{\mathcal{U}^{(t)}\}_{t=T+1}^{T+\tilde{T}}$ (where $\tilde{T}$ denotes the total number of iterations of sparse orthogonal iteration pursuit) satisfies $$D(\mathcal{U}^*, \mathcal{U}^{(t)}) \le \underbrace{\delta_3(\Sigma, k)\cdot \overbrace{\sqrt{s^*\log d/n}}^{\text{Optimal Rate}}}_{\text{Statistical Error}} + \underbrace{\gamma(\Sigma)^{(t-T-1)/4}\cdot D\big(\mathcal{U}^*, \mathcal{U}^{(T+1)}\big)}_{\text{Optimization Error}} \qquad (3)$$ with high probability, where $\delta_3(\Sigma, k)$ is a quantity that only depends on $\Sigma$ and $k$, and $$\gamma(\Sigma) = \big[3\lambda_{k+1}(\Sigma) + \lambda_k(\Sigma)\big]\big/\big[\lambda_{k+1}(\Sigma) + 3\lambda_k(\Sigma)\big] < 1. \qquad (4)$$ Here $\lambda_k(\Sigma)$ and $\lambda_{k+1}(\Sigma)$ are the $k$-th and $(k+1)$-th eigenvalues of $\Sigma$. See §4 for more details. Unlike previous works, our theory and method don't depend on the spiked covariance assumption, or require the data distribution to be Gaussian. Figure 1: An illustration of our two-stage procedure. Our analysis shows that, at the initialization stage, the optimization error decays to zero at the rate of $1/\sqrt{t}$. However, the upper bound of $D(\mathcal{U}^*, \mathcal{U}^{(t)})$ in (2) can't be smaller than the suboptimal $s^*\sqrt{\log d/n}$ rate of convergence, even with an infinite number of iterations. This phenomenon, which is illustrated in Figure 1, reveals the limit of the convex relaxation approaches for sparse PCA. Within the main stage, as the optimization error term in (3) decreases to zero geometrically, the upper bound of $D(\mathcal{U}^*, \mathcal{U}^{(t)})$ decreases towards the $\sqrt{s^*\log d/n}$ statistical rate of convergence, which is minimax-optimal with respect to the sparsity level $s^*$, dimension $d$ and sample size $n$ [17]. Moreover, in Theorem 2 we will show that the basin of attraction for the proposed sparse orthogonal iteration pursuit algorithm can be characterized as $$\Big\{\mathcal{U} : D(\mathcal{U}^*, \mathcal{U}) \le R\Big\}, \quad R = \min\Big\{\sqrt{k\gamma(\Sigma)\big(1 - \gamma(\Sigma)^{1/2}\big)}\big/2,\; \sqrt{2\gamma(\Sigma)}\big/4\Big\}. \qquad (5)$$ Here $\gamma(\Sigma)$ is defined in (4) and $R$ denotes the radius of the basin.
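The contraction factor $\gamma(\Sigma)$ in (4) is directly computable from the spectrum of $\Sigma$. A small helper (the function name is our own) makes the dependence on the eigengap concrete:

```python
import numpy as np

def contraction_factor(Sigma, k):
    """gamma(Sigma) from eq. (4): [3*lam_{k+1} + lam_k] / [lam_{k+1} + 3*lam_k].
    Strictly below 1 whenever lam_k > lam_{k+1}; close to 1 when the
    eigengap is small, so the geometric decay in (3) is then slower."""
    lam = np.sort(np.linalg.eigvalsh(Sigma))[::-1]  # eigenvalues, descending
    return (3 * lam[k] + lam[k - 1]) / (lam[k] + 3 * lam[k - 1])
```

For example, with $\Sigma = \mathrm{diag}(4, 3, 2, 1)$ and $k = 2$ this gives $\gamma = 9/11$, so the per-iteration contraction $\gamma^{1/4}$ in (3) is about $0.95$.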
The contribution of this paper is three-fold: (1) We propose the first tractable procedure that provably attains the subspace estimator with the minimax-optimal statistical rate of convergence with respect to the sparsity level $s^*$, dimension $d$ and sample size $n$, without relying on the restrictive spiked covariance assumption or the Gaussian assumption. (2) We propose a novel algorithm named sparse orthogonal iteration pursuit, which converges to the optimal estimator at a fast geometric rate. The computation within each iteration is highly efficient compared with convex relaxation approaches. (3) We build a joint analytic framework that simultaneously captures the computational and statistical properties of sparse PCA. Under this framework, we characterize the phenomenon of the basin of attraction for the proposed sparse orthogonal iteration pursuit algorithm. In comparison with our previous work on nonconvex M-estimators [25], our analysis provides a more general paradigm for solving nonconvex learning problems with provable guarantees. One byproduct of our analysis is novel techniques for analyzing the statistical properties of the intermediate solutions of the Alternating Direction Method of Multipliers [26]. Notation: Let $A = [A_{i,j}] \in \mathbb{R}^{d\times d}$ and $v = (v_1, \ldots, v_d)^T \in \mathbb{R}^d$. The $\ell_q$ norm ($q \ge 1$) of $v$ is $\|v\|_q$. Specifically, $\|v\|_0$ gives the number of nonzero entries of $v$. For a matrix $A$, the $i$-th largest eigenvalue and singular value are $\lambda_i(A)$ and $\sigma_i(A)$. For $q \ge 1$, $\|A\|_q$ is the matrix operator $q$-norm, e.g., $\|A\|_2 = \sigma_1(A)$. The Frobenius norm is denoted $\|A\|_F$. For $A_1$ and $A_2$, their inner product is $\langle A_1, A_2\rangle = \mathrm{tr}(A_1^T A_2)$. For a set $S$, $|S|$ denotes its cardinality. The $d\times d$ identity matrix is $I_d$. For index sets $\mathcal{I}, \mathcal{J} \subseteq \{1, \ldots, d\}$, we define $A_{\mathcal{I},\mathcal{J}} \in \mathbb{R}^{d\times d}$ to be the matrix whose $(i,j)$-th entry is $A_{i,j}$ if $i \in \mathcal{I}$ and $j \in \mathcal{J}$, and zero otherwise. When $\mathcal{I} = \mathcal{J}$, we abbreviate it as $A_{\mathcal{I}}$. If $\mathcal{I}$ or $\mathcal{J}$ is $\{1, \ldots, d\}$, we replace it with a dot, e.g., $A_{\mathcal{I},\cdot}$.
We denote by $A_{i,\cdot} \in \mathbb{R}^d$ the $i$-th row vector of $A$. A matrix is orthonormal if its columns are unit-length orthogonal vectors. The $(p,q)$-norm of a matrix, denoted $\|A\|_{p,q}$, is obtained by first taking the $\ell_p$ norm of each row, and then taking the $\ell_q$ norm of these row norms. We denote by $\mathrm{diag}(A)$ the vector consisting of the diagonal entries of $A$. With a slight abuse of notation, we denote by $\mathrm{diag}(v)$ the diagonal matrix with $v_1, \ldots, v_d$ on its diagonal. Hereafter, we use generic numerical constants $C, C', C'', \ldots$, whose values change from line to line. 2 Background In the following, we introduce the distance between subspaces and the notion of sparsity for a subspace. Subspace Distance: Let $\mathcal{U}$ and $\mathcal{U}'$ be two $k$-dimensional subspaces of $\mathbb{R}^d$. We denote the projection matrices onto them by $\Pi$ and $\Pi'$ respectively. One definition of the distance between $\mathcal{U}$ and $\mathcal{U}'$ is $$D(\mathcal{U}, \mathcal{U}') = \|\Pi - \Pi'\|_F. \qquad (6)$$ This definition is invariant to rotations of the orthonormal basis. Subspace Sparsity: For the $k$-dimensional principal subspace $\mathcal{U}^*$ of $\Sigma$, the definition of its sparsity should be invariant to the choice of basis, because $\Sigma$'s top $k$ eigenvalues might not be distinct. Here we define the sparsity level $s^*$ of $\mathcal{U}^*$ to be the number of nonzero coefficients on the diagonal of its projection matrix $\Pi^*$. One can verify that (see [17] for details) $$s^* = \big|\mathrm{supp}\big[\mathrm{diag}(\Pi^*)\big]\big| = \|U^*\|_{2,0}, \qquad (7)$$ where $\|\cdot\|_{2,0}$ gives the row-sparsity level, i.e., the number of nonzero rows. Here the columns of $U^*$ can be any orthonormal basis of $\mathcal{U}^*$. This definition reduces to the sparsity of $u_1^*$ when $k = 1$. Subspace Estimation: For the $k$-dimensional $s^*$-sparse principal subspace $\mathcal{U}^*$ of $\Sigma$, [17] considered the following estimator for the orthonormal matrix $U^*$ consisting of the basis of $\mathcal{U}^*$: $$\hat{U} = \operatorname*{argmin}_{U\in\mathbb{R}^{d\times k}} -\big\langle \hat{\Sigma}, UU^T\big\rangle, \quad \text{subject to } U \text{ orthonormal, and } \|U\|_{2,0} \le s^*, \qquad (8)$$ where $\hat{\Sigma}$ is an estimator of $\Sigma$. Let $\hat{\mathcal{U}}$ be the column space of $\hat{U}$.
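The subspace distance (6) and subspace sparsity (7) are easy to compute from any orthonormal basis; a short sketch (function names are our own) also makes their rotation invariance checkable:

```python
import numpy as np

def subspace_distance(U1, U2):
    """D(U, U') = ||Pi - Pi'||_F from eq. (6); U1, U2 are d x k orthonormal bases."""
    P1 = U1 @ U1.T   # projection matrix onto span(U1)
    P2 = U2 @ U2.T
    return np.linalg.norm(P1 - P2, ord='fro')

def subspace_sparsity(U):
    """s* = |supp(diag(Pi))| from eq. (7): the number of nonzero rows of any
    orthonormal basis, invariant to rotations of that basis."""
    P = U @ U.T
    return int(np.sum(np.abs(np.diag(P)) > 1e-12))
```

Rotating the columns of a basis $U$ by any orthogonal $k\times k$ matrix leaves both quantities unchanged, since the projection matrix $UU^T$ is unchanged.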
[17] proved that, assuming $\hat{\Sigma}$ is the sample covariance estimator and the data are independent sub-Gaussian, $\hat{\mathcal{U}}$ attains the optimal statistical rate. However, direct computation of this estimator is NP-hard even for $k = 1$ [22]. 3 A Two-stage Procedure for Sparse PCA In the following, we present the two-stage procedure for sparse PCA. We first introduce sparse orthogonal iteration pursuit for the main stage and then present the convex relaxation for initialization.

Algorithm 1 Main stage: Sparse orthogonal iteration pursuit. Here $T$ denotes the total number of iterations of the initialization stage. To unify the later analysis, let $t$ start from $T+1$.
1: Function: $\hat{U} \leftarrow$ SparseOrthogonalIterationPursuit$(\hat{\Sigma}, U_{\mathrm{init}})$
2: Input: Covariance matrix estimator $\hat{\Sigma}$, initialization $U_{\mathrm{init}}$
3: Parameter: Sparsity parameter $\hat{s}$, maximum number of iterations $\tilde{T}$
4: Initialization: $\tilde{U}^{(T+1)} \leftarrow \mathrm{Truncate}(U_{\mathrm{init}}, \hat{s})$, $\;\big[U^{(T+1)}, R_2^{(T+1)}\big] \leftarrow \mathrm{ThinQR}\big(\tilde{U}^{(T+1)}\big)$
5: For $t = T+1, \ldots, T+\tilde{T}-1$
6: $\quad \tilde{V}^{(t+1)} \leftarrow \hat{\Sigma}\cdot U^{(t)}$, $\;\big[V^{(t+1)}, R_1^{(t+1)}\big] \leftarrow \mathrm{ThinQR}\big(\tilde{V}^{(t+1)}\big)$
7: $\quad \tilde{U}^{(t+1)} \leftarrow \mathrm{Truncate}\big(V^{(t+1)}, \hat{s}\big)$, $\;\big[U^{(t+1)}, R_2^{(t+1)}\big] \leftarrow \mathrm{ThinQR}\big(\tilde{U}^{(t+1)}\big)$
8: End For
9: Output: $\hat{U} \leftarrow U^{(T+\tilde{T})}$

Sparse Orthogonal Iteration Pursuit: For the main stage, we propose sparse orthogonal iteration pursuit (Algorithm 1) to solve (8). In Algorithm 1, $\mathrm{Truncate}(\cdot,\cdot)$ (Line 7) is defined in Algorithm 2. In Lines 6 and 7, $\mathrm{ThinQR}(\cdot)$ denotes the thin QR decomposition (see [27] for details). In detail, $V^{(t+1)} \in \mathbb{R}^{d\times k}$ and $U^{(t+1)} \in \mathbb{R}^{d\times k}$ are orthonormal matrices satisfying $V^{(t+1)}\cdot R_1^{(t+1)} = \tilde{V}^{(t+1)}$ and $U^{(t+1)}\cdot R_2^{(t+1)} = \tilde{U}^{(t+1)}$, where $R_1^{(t+1)}, R_2^{(t+1)} \in \mathbb{R}^{k\times k}$. This decomposition can be accomplished with $O(k^2 d)$ operations using the Householder algorithm [27]. Recall that $k$ is the rank of the principal subspace of interest, which is much smaller than the dimension $d$. Algorithm 1 consists of two steps: (1) Line 6 performs a matrix multiplication and a renormalization using QR decomposition.
This step is named orthogonal iteration in numerical analysis [27]. When the first leading eigenvector ($k = 1$) is of interest, it reduces to the well-known power iteration. The intuition behind this step can be understood as follows. Consider the minimization problem in (8) without the row-sparsity constraint. Note that the gradient of the objective function is $-2\hat{\Sigma}\cdot U^{(t)}$. Hence, the gradient descent update scheme for this problem is $$\tilde{V}^{(t+1)} \leftarrow \mathcal{P}_{\mathrm{orth}}\big(U^{(t)} + \eta\cdot 2\hat{\Sigma}\cdot U^{(t)}\big), \qquad (9)$$ where $\eta$ is the step size and $\mathcal{P}_{\mathrm{orth}}(\cdot)$ denotes the renormalization step. [28] showed that the optimal step size $\eta$ is infinity. Thus we have $\mathcal{P}_{\mathrm{orth}}\big(U^{(t)} + \eta\cdot 2\hat{\Sigma}\cdot U^{(t)}\big) = \mathcal{P}_{\mathrm{orth}}\big(\eta\cdot 2\hat{\Sigma}\cdot U^{(t)}\big) = \mathcal{P}_{\mathrm{orth}}\big(\hat{\Sigma}\cdot U^{(t)}\big)$, which implies that (9) is equivalent to Line 6. (2) In Line 7, we take a truncation step to enforce the row-sparsity constraint in (8). In detail, we greedily select the $\hat{s}$ most important rows. To enforce the orthonormality constraint in (8), we perform another renormalization step after the truncation. Note that the QR decomposition in Line 7 gives a $U^{(t+1)}$ that is both orthonormal and row-sparse, because $\tilde{U}^{(t+1)}$ is row-sparse by truncation, and QR decomposition preserves its row-sparsity. By iteratively performing these two steps, we approximately solve the nonconvex problem in (8). Although it is not clear whether this procedure achieves the global minimum of (8), we will prove that the obtained estimator enjoys the same optimal statistical rate of convergence as the global minimum.

Algorithm 2 Main stage: The $\mathrm{Truncate}(\cdot,\cdot)$ function used in Line 7 of Algorithm 1.
1: Function: $\tilde{U}^{(t+1)} \leftarrow \mathrm{Truncate}\big(V^{(t+1)}, \hat{s}\big)$
2: Row Sorting: $\mathcal{I}_{\hat{s}} \leftarrow$ the set of row indices $i$ with the top $\hat{s}$ largest
$\ell_2$ row norms $\big\|V_{i,\cdot}^{(t+1)}\big\|_2$
3: Truncation: $\tilde{U}_{i,\cdot}^{(t+1)} \leftarrow \mathbf{1}\{i \in \mathcal{I}_{\hat{s}}\}\cdot V_{i,\cdot}^{(t+1)}$, for all $i \in \{1, \ldots, d\}$
4: Output: $\tilde{U}^{(t+1)}$

Algorithm 3 Initialization stage: Solving the convex relaxation (10) using ADMM. In Lines 6 and 7, we need to solve two subproblems. The first is equivalent to projecting $\Phi^{(t)} - \Theta^{(t)} + \hat{\Sigma}/\rho$ onto $\mathcal{A}$; this projection can be computed using Algorithm 4 in [29]. The second can be solved by entry-wise soft-thresholding, shown in Algorithm 5 in [29]. We defer these two algorithms and their derivations to the extended version [29] of this paper.
1: Function: $U_{\mathrm{init}} \leftarrow \mathrm{ADMM}(\hat{\Sigma})$
2: Input: Covariance matrix estimator $\hat{\Sigma}$
3: Parameter: Regularization parameter $\rho > 0$ in (10), penalty parameter $\beta > 0$ of the augmented Lagrangian, maximum number of iterations $T$
4: $\Pi^{(0)} \leftarrow 0$, $\Phi^{(0)} \leftarrow 0$, $\Theta^{(0)} \leftarrow 0$
5: For $t = 0, \ldots, T-1$
6: $\quad \Pi^{(t+1)} \leftarrow \operatorname*{argmin}_{\Pi\in\mathcal{A}} L\big(\Pi, \Phi^{(t)}, \Theta^{(t)}\big) + \beta/2\cdot\big\|\Pi - \Phi^{(t)}\big\|_F^2$
7: $\quad \Phi^{(t+1)} \leftarrow \operatorname*{argmin}_{\Phi\in\mathcal{B}} L\big(\Pi^{(t+1)}, \Phi, \Theta^{(t)}\big) + \beta/2\cdot\big\|\Pi^{(t+1)} - \Phi\big\|_F^2$
8: $\quad \Theta^{(t+1)} \leftarrow \Theta^{(t)} - \beta\big(\Pi^{(t+1)} - \Phi^{(t+1)}\big)$
9: End For
10: $\overline{\Pi}^{(T)} \leftarrow 1/T\cdot\sum_{t=0}^{T}\Pi^{(t)}$; let the columns of $U_{\mathrm{init}}$ be the top $k$ leading eigenvectors of $\overline{\Pi}^{(T)}$
11: Output: $U_{\mathrm{init}} \in \mathbb{R}^{d\times k}$

Convex Relaxation for Initialization: To obtain a good initialization for sparse orthogonal iteration pursuit, we consider the following convex minimization problem proposed by [5, 13]: $$\text{minimize } -\big\langle\hat{\Sigma}, \Pi\big\rangle + \rho\|\Pi\|_{1,1}, \quad \text{subject to } \mathrm{tr}(\Pi) = k,\; 0 \preceq \Pi \preceq I_d, \qquad (10)$$ which relaxes the combinatorial optimization problem in (8). The intuition behind this relaxation can be understood as follows: (1) $\Pi$ is a reparametrization of $UU^T$ in (8), which is a projection matrix with $k$ nonzero eigenvalues equal to $1$. In (10), this constraint is relaxed to $\mathrm{tr}(\Pi) = k$ and $0 \preceq \Pi \preceq I_d$, which requires the eigenvalues of $\Pi$ to lie in $[0, 1]$ while summing to $k$. (2) For the row-sparsity constraint in (8), [13] proved that $\|\Pi^*\|_{0,0} \le \big|\mathrm{supp}[\mathrm{diag}(\Pi^*)]\big|^2 = \|U^*\|_{2,0}^2 = (s^*)^2$. Correspondingly, the row-sparsity constraint in (8) translates to $\|\Pi\|_{0,0} \le (s^*)^2$, which is relaxed to the regularization term $\|\Pi\|_{1,1}$ in (10). For notational simplicity, we define $$\mathcal{A} = \big\{\Pi : \Pi \in \mathbb{R}^{d\times d},\; \mathrm{tr}(\Pi) = k,\; 0 \preceq \Pi \preceq I_d\big\}. \qquad (11)$$ Note that (10) has both a nonsmooth regularization term and a nontrivial constraint set $\mathcal{A}$. We use the Alternating Direction Method of Multipliers (ADMM, Algorithm 3). It considers the equivalent form of (10): $$\text{minimize } -\big\langle\hat{\Sigma}, \Pi\big\rangle + \rho\|\Phi\|_{1,1}, \quad \text{subject to } \Pi = \Phi,\; \Pi \in \mathcal{A},\; \Phi \in \mathcal{B}, \quad \text{where } \mathcal{B} = \mathbb{R}^{d\times d}, \qquad (12)$$ and iteratively minimizes the augmented Lagrangian $L(\Pi, \Phi, \Theta) + \beta/2\cdot\|\Pi - \Phi\|_F^2$, where $$L(\Pi, \Phi, \Theta) = -\big\langle\hat{\Sigma}, \Pi\big\rangle + \rho\|\Phi\|_{1,1} - \langle\Theta, \Pi - \Phi\rangle, \quad \Pi \in \mathcal{A},\; \Phi \in \mathcal{B},\; \Theta \in \mathbb{R}^{d\times d} \qquad (13)$$ is the Lagrangian corresponding to (12), $\Theta \in \mathbb{R}^{d\times d}$ is the Lagrange multiplier associated with the equality constraint $\Pi = \Phi$, and $\beta > 0$ is a penalty parameter that enforces this equality constraint. Note that other variants of ADMM, e.g., the Peaceman–Rachford splitting method [30], are also applicable, and would yield similar theoretical guarantees along with improved practical performance.
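Putting the pieces together, here is a compact numpy sketch of the two stages. The ADMM update formulas below (the Fantope projection center, the soft-thresholding center, and the sign conventions for $\Theta$) follow our own derivation from the augmented Lagrangian (13), so they may differ in convention from Algorithms 4–5 in [29]; treat this as an illustrative implementation, not the authors' reference code.

```python
import numpy as np

def truncate(V, s_hat):
    """Algorithm 2: keep the s_hat rows of V with the largest l2 norms, zero the rest."""
    keep = np.argsort(np.linalg.norm(V, axis=1))[-s_hat:]
    U = np.zeros_like(V)
    U[keep] = V[keep]
    return U

def soip(Sigma_hat, U_init, s_hat, n_iter=100):
    """Algorithm 1 (sparse orthogonal iteration pursuit): alternate the
    orthogonal-iteration step Sigma_hat @ U with row truncation, each
    followed by thin QR renormalization."""
    U, _ = np.linalg.qr(truncate(U_init, s_hat))
    for _ in range(n_iter):
        V, _ = np.linalg.qr(Sigma_hat @ U)       # Line 6: orthogonal iteration
        U, _ = np.linalg.qr(truncate(V, s_hat))  # Line 7: truncate + renormalize
    return U

def project_fantope(M, k):
    """Euclidean projection onto A = {0 <= Pi <= I, tr(Pi) = k} from (11):
    clip the eigenvalues to [0, 1] after a shift found by bisection so
    that they sum to k; eigenvectors are unchanged."""
    vals, vecs = np.linalg.eigh((M + M.T) / 2)
    lo, hi = vals.min() - 1.0, vals.max()
    for _ in range(100):
        theta = (lo + hi) / 2
        gamma = np.clip(vals - theta, 0.0, 1.0)
        lo, hi = (theta, hi) if gamma.sum() > k else (lo, theta)
    return (vecs * gamma) @ vecs.T

def soft_threshold(M, tau):
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def admm_init(Sigma_hat, k, rho, beta, T):
    """Algorithm 3 sketch: ADMM on the split form (12), averaging the Pi
    iterates and returning the top-k eigenvectors of the average as U_init."""
    d = Sigma_hat.shape[0]
    Pi = Phi = Theta = np.zeros((d, d))
    Pi_avg = np.zeros((d, d))
    for _ in range(T):
        Pi = project_fantope(Phi + (Sigma_hat + Theta) / beta, k)  # Line 6
        Phi = soft_threshold(Pi - Theta / beta, rho / beta)        # Line 7
        Theta = Theta - beta * (Pi - Phi)                          # Line 8
        Pi_avg += Pi
    _, vecs = np.linalg.eigh(Pi_avg / T)
    return vecs[:, -k:]                                            # Line 10
```

A full run of the two-stage procedure is then `U_hat = soip(Sigma_hat, admm_init(Sigma_hat, k, rho, beta, T), s_hat)`, with the tuning parameters $\rho$, $\beta$, $\hat{s}$, $T$ and $\tilde{T}$ chosen as in Theorem 1.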
4 Theoretical Results To describe our results, we define the model class $\mathcal{M}_d(\Sigma, k, s^*)$ as follows: $X = \Sigma^{1/2} Z$, where $Z \in \mathbb{R}^d$ is sub-Gaussian with mean zero, variance proxy less than $1$, and covariance matrix $I_d$; the $k$-dimensional principal subspace $\mathcal{U}^*$ of $\Sigma$ is $s^*$-sparse; and $\lambda_k(\Sigma) - \lambda_{k+1}(\Sigma) > 0$. Here $\Sigma^{1/2}$ satisfies $\Sigma^{1/2}\cdot\Sigma^{1/2} = \Sigma$. Recall that the sparsity of $\mathcal{U}^*$ is defined in (7) and $\lambda_j(\Sigma)$ is the $j$-th eigenvalue of $\Sigma$. For notational simplicity, hereafter we abbreviate $\lambda_j(\Sigma)$ as $\lambda_j$. This model class doesn't restrict $\Sigma$ to spiked covariance matrices, in which the $(k+1)$-th through $d$-th eigenvalues of $\Sigma$ must be identical. Moreover, we don't require $X$ to be exactly Gaussian, which is a crucial requirement in several previous works, e.g., [12, 18]. We first introduce some notation. Recall that $D(\cdot,\cdot)$ is the subspace distance defined in (6), and that $\gamma(\Sigma) < 1$ is defined in (4); it will be abbreviated as $\gamma$ hereafter. We define $$n_{\min} = C\cdot (s^*)^2 \log d\cdot \Big[\min\big\{\sqrt{k\gamma(1-\gamma^{1/2})}/2,\; \sqrt{2\gamma}/4\big\}\Big]^{-2}\cdot \lambda_1^2\big/(\lambda_k - \lambda_{k+1})^2, \qquad (14)$$ which denotes the required sample complexity. We also define $$\zeta_1 = \big[C\lambda_1/(\lambda_k - \lambda_{k+1})\big]\cdot s^*\sqrt{\log d/n}, \qquad \zeta_2 = \big[4\big/\sqrt{\lambda_k - \lambda_{k+1}}\big]\cdot\big(k\cdot s^*\cdot d^2 \log d/n\big)^{1/4}, \qquad (15)$$ which will be used in the analysis of the first stage, and $$\xi_1 = C\sqrt{k}\cdot\big[\lambda_k/(\lambda_k - \lambda_{k+1})\big]^2\cdot\big[\sqrt{\lambda_1\lambda_{k+1}}\big/(\lambda_k - \lambda_{k+1})\big]\cdot\sqrt{s^*(k + \log d)/n}, \qquad (16)$$ which will be used in the analysis of the main stage. Meanwhile, recall that the radius $R$ of the basin of attraction for sparse orthogonal iteration pursuit is defined in (5). We define $$T_{\min} = \big\lceil\zeta_2^2/(R - \zeta_1)^2\big\rceil, \qquad \tilde{T}_{\min} = 4\big\lceil\log(R/\xi_1)\big/\log(1/\gamma)\big\rceil \qquad (17)$$ as the required minimum numbers of iterations of the two stages respectively. These results will be proved in the extended version [29] of this paper. Main Result: Recall that $\mathcal{U}^{(t)}$ denotes the subspace spanned by the columns of $U^{(t)}$ in Algorithm 1. Theorem 1. Let $x_1, \ldots, x_n$ be independent realizations of $X \in \mathcal{M}_d(\Sigma, k, s^*)$ with $n \ge n_{\min}$, and let $\hat{\Sigma}$ be the sample covariance matrix.
Suppose the regularization parameter is $\rho = C\lambda_1\sqrt{\log d/n}$ for a sufficiently large $C > 0$ in (10), and the penalty parameter $\beta$ of ADMM (Line 3 of Algorithm 3) is $\beta = d\cdot\rho/\sqrt{k}$. Also, suppose the sparsity parameter $\hat{s}$ in Algorithm 1 (Line 3) is chosen such that $\hat{s} = C\max\big\{4k/(\gamma^{-1/2}-1)^2, 1\big\}\cdot s^*$, where $C \ge 1$ is an integer constant. After $T \ge T_{\min}$ iterations of Algorithm 3 and then $\tilde{T} \ge \tilde{T}_{\min}$ iterations of Algorithm 1, we obtain $\hat{U} = U^{(T+\tilde{T})}$ and $$D(\mathcal{U}^*, \hat{\mathcal{U}}) \le C\xi_1 = C'\sqrt{k}\cdot\big[\lambda_k/(\lambda_k-\lambda_{k+1})\big]^2\cdot\big[\sqrt{\lambda_1\lambda_{k+1}}\big/(\lambda_k-\lambda_{k+1})\big]\cdot\sqrt{s^*(k+\log d)/n}$$ with high probability. Here the equality follows from the definition of $\xi_1$ in (16). Minimax-Optimality: To establish the optimality of Theorem 1, we consider a smaller model class $\widetilde{\mathcal{M}}_d(\Sigma, k, s^*, \kappa)$, which is the same as $\mathcal{M}_d(\Sigma, k, s^*)$ except that the eigengap of $\Sigma$ satisfies $\lambda_k - \lambda_{k+1} > \kappa\lambda_k$ for some constant $\kappa > 0$. This condition is mild compared to previous works; e.g., [12] assumes $\lambda_k - \lambda_{k+1} \ge \kappa\lambda_1$, which is more restrictive because $\lambda_1 \ge \lambda_k$. Within $\widetilde{\mathcal{M}}$, we assume that the rank $k$ of the principal subspace is fixed. This assumption is reasonable; e.g., in applications like population genetics [31], the rank $k$ of the principal subspace represents the number of population groups, which doesn't increase when the sparsity level $s^*$, dimension $d$ and sample size $n$ grow. Theorem 3.1 of [17] implies the following minimax lower bound: $$\inf_{\tilde{U}}\sup_{X\in\widetilde{\mathcal{M}}_d(\Sigma,k,s^*,\kappa)} \mathbb{E}\Big[D\big(\tilde{\mathcal{U}}, \mathcal{U}^*\big)^2\Big] \ge C\lambda_1\lambda_{k+1}\big/(\lambda_k-\lambda_{k+1})^2\cdot(s^*-k)\cdot\big[k+\log\big((d-k)/(s^*-k)\big)\big]\big/n,$$ where $\tilde{\mathcal{U}}$ denotes any principal subspace estimator. When $s^*$ and $d$ are sufficiently large (to avoid trivial cases), the right-hand side is lower bounded by $C'\lambda_1\lambda_{k+1}/(\lambda_k-\lambda_{k+1})^2\cdot s^*(k + 1/4\cdot\log d)/n$. By Lemma 2.1 in [29], we have $D(\mathcal{U}^*, \hat{\mathcal{U}}) \le \sqrt{2k}$. For $n$, $d$ and $s^*$ sufficiently large, it is easy to derive the same upper bound in expectation from Theorem 1. It attains the minimax lower bound above within $\widetilde{\mathcal{M}}_d(\Sigma, k, s^*, \kappa)$, up to the $1/4$ constant in front of $\log d$ and a total factor of $k\cdot\kappa^{-4}$.
Analysis of the Main Stage: Recall that $\mathcal{U}^{(t)}$ is the subspace spanned by the columns of $U^{(t)}$ in Algorithm 1, and that the initialization is $U_{\mathrm{init}}$ with column space $\mathcal{U}_{\mathrm{init}}$. Theorem 2. Under the same conditions as in Theorem 1, and provided that $D(\mathcal{U}^*, \mathcal{U}_{\mathrm{init}}) \le R$, the iterative sequence $\mathcal{U}^{(T+1)}, \mathcal{U}^{(T+2)}, \ldots, \mathcal{U}^{(t)}, \ldots$ satisfies $$D(\mathcal{U}^*, \mathcal{U}^{(t)}) \le \underbrace{C\xi_1}_{\text{Statistical Error}} + \underbrace{\gamma^{(t-T-1)/4}\cdot\gamma^{-1/2} R}_{\text{Optimization Error}} \qquad (18)$$ with high probability, where $\xi_1$ is defined in (16), $R$ is defined in (5), and $\gamma$ is defined in (4). Theorem 2 shows that, as long as $\mathcal{U}_{\mathrm{init}}$ falls into the basin of attraction, sparse orthogonal iteration pursuit converges at a geometric rate in optimization error, since $\gamma < 1$. According to the definition of $\gamma$ in (4), when $\lambda_k$ is close to $\lambda_{k+1}$, $\gamma$ is close to $1$, and the optimization error term then decays at a slower rate. The optimization error here doesn't increase with the dimension $d$, which makes this algorithm suitable for ultra high dimensional problems. In (18), when $t$ is sufficiently large that $\gamma^{(t-T-1)/4}\cdot\gamma^{-1/2} R \le \xi_1$, $D(\mathcal{U}^*, \mathcal{U}^{(t)})$ is upper bounded by $2C\xi_1$, which gives the optimal statistical rate. Solving for $t$ in this inequality, we obtain $t = \tilde{T} \ge \tilde{T}_{\min}$, which is defined in (17). Analysis of the Initialization Stage: Let $\overline{\Pi}^{(t)} = 1/t\cdot\sum_{i=1}^{t}\Pi^{(i)}$, where $\Pi^{(i)}$ is defined in Algorithm 3. Let $\mathcal{U}^{(t)}$ be the $k$-dimensional subspace spanned by the top $k$ leading eigenvectors of $\overline{\Pi}^{(t)}$. Theorem 3. Under the same conditions as in Theorem 1, the iterative sequence of $k$-dimensional subspaces $\mathcal{U}^{(0)}, \mathcal{U}^{(1)}, \ldots, \mathcal{U}^{(t)}, \ldots$ satisfies $$D(\mathcal{U}^*, \mathcal{U}^{(t)}) \le \underbrace{\zeta_1}_{\text{Statistical Error}} + \underbrace{\zeta_2\cdot 1/\sqrt{t}}_{\text{Optimization Error}} \qquad (19)$$ with high probability. Here $\zeta_1$ and $\zeta_2$ are defined in (15). In Theorem 3 the optimization error term decays to zero at the rate of $1/\sqrt{t}$. Note that $\zeta_2$ increases with $d$ at the rate of $\sqrt{d}\cdot(\log d)^{1/4}$.
That is to say, the convex relaxation is computationally less efficient than sparse orthogonal iteration pursuit, which justifies the early stopping of ADMM. To ensure that $\mathcal{U}^{(T)}$ enters the basin of attraction, we need $\zeta_1 + \zeta_2/\sqrt{T} \le R$. Solving for $T$ gives $T \ge T_{\min}$, where $T_{\min}$ is defined in (17). The proof of Theorem 3 is a nontrivial combination of optimization and statistical analysis under the variational inequality framework, which is provided in detail in the extended version [29] of this paper.

Figure 2: An illustration of the main results. Panels (a)–(c) plot $D(\mathcal{U}^*, \mathcal{U}^{(t)})$ against the iteration $t$ for the initialization stage and the main stage; panels (d) and (e) plot $D(\mathcal{U}^*, \hat{\mathcal{U}})$ against $\sqrt{s^*\log d/n}$ for $n = 60$ and $n = 100$ with $d \in \{128, 192, 256\}$. See §5 for the detailed experiment settings and interpretation.

Table 1: A comparison of subspace estimation error with existing sparse PCA procedures. The error is measured by $D(\mathcal{U}^*, \hat{\mathcal{U}})$ defined in (6). Standard deviations are provided in the parentheses.
Procedure | Setting (i) | Setting (ii)
Our Procedure | 0.32 (0.0067) | 0.064 (0.00016)
Convex Relaxation [13] | 1.62 (0.0398) | 0.57 (0.021)
TPower [11] + Deflation Method [24] | 1.15 (0.1336) | 0.01 (0.00042)
GPower [10] + Deflation Method [24] | 1.84 (0.0226) | 1.75 (0.029)
PathSPCA [8] + Deflation Method [24] | 2.12 (0.0226) | 2.10 (0.018)
Setting (i): $d = 200$, $s = 10$, $k = 5$, $n = 50$, $\Sigma$'s eigenvalues are $\{100, 100, 100, 100, 4, 1, \ldots, 1\}$; Setting (ii): the same as (i) except $n = 100$ and $\Sigma$'s eigenvalues are $\{300, 240, 180, 120, 60, 1, \ldots, 1\}$.

5 Numerical Results Figure 2 illustrates the main theoretical results. For (a)–(c), we set $d = 200$, $s^* = 10$, $k = 5$, $n = 100$, and $\Sigma$'s eigenvalues are $\{100, 100, 100, 100, 10, 1, \ldots, 1\}$.
In detail, (a) illustrates the $1/\sqrt{t}$ decay of the optimization error at the initialization stage; (b) illustrates the decay of the total estimation error (in log-scale) at the main stage; (c) illustrates the basin of attraction phenomenon, as well as the geometric decay of the optimization error (in log-scale) of sparse orthogonal iteration pursuit, as characterized in §4. For (d) and (e), the eigenstructure is the same, while $d$, $n$ and $s^*$ take multiple values. They show that the theoretical $\sqrt{s^*\log d/n}$ statistical rate of our estimator is tight in practice. In Table 1, we compare the subspace error of our procedure with existing methods, where all except our procedure and the convex relaxation [13] leverage the deflation method [24] for subspace estimation with $k > 1$. We consider two settings: setting (i) is more challenging than setting (ii), since the top $k$ eigenvalues of $\Sigma$ are not distinct, the eigengap is small and the sample size is smaller. Our procedure significantly outperforms the other existing methods on subspace recovery in both settings. Acknowledgement: This research is partially supported by the grants NSF IIS1408910, NSF IIS1332109, NIH R01MH102339, NIH R01GM083084, and NIH R01HG06841. References [1] I. Johnstone, A. Lu. On consistency and sparsity for principal components analysis in high dimensions, Journal of the American Statistical Association 2009;104:682–693. [2] D. Paul. Asymptotics of sample eigenstructure for a large dimensional spiked covariance model, Statistica Sinica 2007;17:1617. [3] B. Nadler. Finite sample approximation results for principal component analysis: A matrix perturbation approach, The Annals of Statistics 2008:2791–2817. [4] I. Jolliffe, N. Trendafilov, M. Uddin. A modified principal component technique based on the Lasso, Journal of Computational and Graphical Statistics 2003;12:531–547. [5] A. d'Aspremont, L. E. Ghaoui, M. I. Jordan, G. R. Lanckriet.
A Direct Formulation for Sparse PCA Using Semidefinite Programming, SIAM Review 2007:434–448. [6] H. Zou, T. Hastie, R. Tibshirani. Sparse principal component analysis, Journal of Computational and Graphical Statistics 2006;15:265–286. [7] H. Shen, J. Huang. Sparse principal component analysis via regularized low rank matrix approximation, Journal of Multivariate Analysis 2008;99:1015–1034. [8] A. d'Aspremont, F. Bach, L. Ghaoui. Optimal solutions for sparse principal component analysis, The Journal of Machine Learning Research 2008;9:1269–1294. [9] D. Witten, R. Tibshirani, T. Hastie. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis, Biostatistics 2009;10:515–534. [10] M. Journée, Y. Nesterov, P. Richtárik, R. Sepulchre. Generalized power method for sparse principal component analysis, The Journal of Machine Learning Research 2010;11:517–553. [11] X.-T. Yuan, T. Zhang. Truncated power method for sparse eigenvalue problems, The Journal of Machine Learning Research 2013;14:899–925. [12] Z. Ma. Sparse principal component analysis and iterative thresholding, The Annals of Statistics 2013;41. [13] V. Q. Vu, J. Cho, J. Lei, K. Rohe. Fantope projection and selection: A near-optimal convex relaxation of sparse PCA, in Advances in Neural Information Processing Systems 2013:2670–2678. [14] A. Amini, M. Wainwright. High-dimensional analysis of semidefinite relaxations for sparse principal components, The Annals of Statistics 2009;37:2877–2921. [15] V. Q. Vu, J. Lei. Minimax Rates of Estimation for Sparse PCA in High Dimensions, in International Conference on Artificial Intelligence and Statistics 2012:1278–1286. [16] A. Birnbaum, I. M. Johnstone, B. Nadler, D. Paul, et al. Minimax bounds for sparse PCA with noisy high-dimensional data, The Annals of Statistics 2013;41:1055–1084. [17] V. Q. Vu, J. Lei.
Minimax sparse principal subspace estimation in high dimensions, The Annals of Statistics 2013;41:2905–2947. [18] T. T. Cai, Z. Ma, Y. Wu, et al. Sparse PCA: Optimal rates and adaptive estimation, The Annals of Statistics 2013;41:3074–3110. [19] Q. Berthet, P. Rigollet. Optimal detection of sparse principal components in high dimension, The Annals of Statistics 2013;41:1780–1815. [20] Q. Berthet, P. Rigollet. Complexity Theoretic Lower Bounds for Sparse Principal Component Detection, in COLT 2013:1046–1066. [21] J. Lei, V. Q. Vu. Sparsistency and Agnostic Inference in Sparse PCA, arXiv:1401.6978 2014. [22] B. Moghaddam, Y. Weiss, S. Avidan. Spectral bounds for sparse PCA: Exact and greedy algorithms, Advances in Neural Information Processing Systems 2006;18:915. [23] K. Ball. An elementary introduction to modern convex geometry, Flavors of Geometry 1997;31:1–58. [24] L. Mackey. Deflation methods for sparse PCA, Advances in Neural Information Processing Systems 2009;21:1017–1024. [25] Z. Wang, H. Liu, T. Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems, The Annals of Statistics 2014;42:2164–2201. [26] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine Learning 2011;3:1–122. [27] G. H. Golub, C. F. Van Loan. Matrix Computations. Johns Hopkins University Press 2012. [28] R. Arora, A. Cotter, K. Livescu, N. Srebro. Stochastic optimization for PCA and PLS, in 50th Annual Allerton Conference on Communication, Control, and Computing 2012:861–868, IEEE. [29] Z. Wang, H. Lu, H. Liu. Nonconvex statistical optimization: Minimax-optimal Sparse PCA in polynomial time, arXiv:1408.5352 2014. [30] B. He, H. Liu, Z. Wang, X. Yuan. A Strictly Contractive Peaceman–Rachford Splitting Method for Convex Programming, SIAM Journal on Optimization 2014;24:1011–1040. [31] B. E. Engelhardt, M. Stephens. Analysis of population structure: a unifying framework and novel methods based on sparse factor analysis, PLoS Genetics 2010;6:e1001117.
2014
Near-optimal Reinforcement Learning in Factored MDPs Ian Osband Stanford University iosband@stanford.edu Benjamin Van Roy Stanford University bvr@stanford.edu Abstract Any reinforcement learning algorithm that applies to all Markov decision processes (MDPs) will suffer $\Omega(\sqrt{SAT})$ regret on some MDP, where $T$ is the elapsed time and $S$ and $A$ are the cardinalities of the state and action spaces. This implies $T = \Omega(SA)$ time to guarantee a near-optimal policy. In many settings of practical interest, due to the curse of dimensionality, $S$ and $A$ can be so enormous that this learning time is unacceptable. We establish that, if the system is known to be a factored MDP, it is possible to achieve regret that scales polynomially in the number of parameters encoding the factored MDP, which may be exponentially smaller than $S$ or $A$. We provide two algorithms that satisfy near-optimal regret bounds in this context: posterior sampling reinforcement learning (PSRL) and an upper confidence bound algorithm (UCRL-Factored). 1 Introduction We consider a reinforcement learning agent that takes sequential actions within an uncertain environment with an aim to maximize cumulative reward [1]. We model the environment as a Markov decision process (MDP) whose dynamics are not fully known to the agent. The agent can learn to improve future performance by exploring poorly-understood states and actions, but might improve its short-term rewards through a policy which exploits its existing knowledge. Efficient reinforcement learning balances exploration with exploitation to earn high cumulative reward. The vast majority of work on efficient reinforcement learning has focused upon the tabula rasa setting, where little prior knowledge is available about the environment beyond its state and action spaces. In this setting several algorithms have been designed to attain sample complexity polynomial in the number of states $S$ and actions $A$ [2, 3].
Stronger bounds on regret, the difference between an agent's cumulative reward and that of the optimal controller, have also been developed. The strongest results of this kind establish $\tilde{O}(S\sqrt{AT})$ regret for particular algorithms [4, 5, 6], which is close to the lower bound $\Omega(\sqrt{SAT})$ [4]. However, in many settings of interest, due to the curse of dimensionality, $S$ and $A$ can be so enormous that even this level of regret is unacceptable. In many practical problems the agent will have some prior understanding of the environment beyond tabula rasa. For example, in a large production line with $m$ machines in sequence, each with $K$ possible states, we may know that over a single time-step each machine can only be influenced by its direct neighbors. Such simple observations can reduce the dimensionality of the learning problem exponentially, but cannot easily be exploited by a tabula rasa algorithm. Factored MDPs (FMDPs) [7], whose transitions can be represented by a dynamic Bayesian network (DBN) [8], are one effective way to represent these structured MDPs compactly. Several algorithms have been developed that exploit the known DBN structure to achieve sample complexity polynomial in the parameters of the FMDP, which may be exponentially smaller than $S$ or $A$ [9, 10, 11]. However, these polynomial bounds include several high order terms. We present two algorithms, UCRL-Factored and PSRL, with the first near-optimal regret bounds for factored MDPs. UCRL-Factored is an optimistic algorithm that modifies the confidence sets of UCRL2 [4] to take advantage of the network structure. PSRL is motivated by the old heuristic of Thompson sampling [12] and has previously been shown to be efficient in non-factored MDPs [13, 6]. These algorithms are described fully in Section 6. Both algorithms make use of an approximate FMDP planner in internal steps.
However, even where an FMDP can be represented concisely, solving for the optimal policy may take exponentially long in the most general case [14]. Our focus in this paper is upon the statistical aspect of the learning problem and like earlier discussions we do not specify which computational methods are used [10]. Our results serve as a reduction of the reinforcement learning problem to finding an approximate solution for a given FMDP. In many cases of interest, effective approximate planning methods for FMDPs do exist. Investigating and extending these methods are an ongoing subject of research [15, 16, 17, 18]. 2 Problem formulation We consider the problem of learning to optimize a random finite horizon MDP M = (S, A, R^M, P^M, τ, ρ) in repeated finite episodes of interaction. S is the state space, A is the action space, R^M(s, a) is the reward distribution over ℝ in state s with action a, P^M(·|s, a) is the transition probability over S from state s with action a, τ is the time horizon, and ρ the initial state distribution. We define the MDP and all other random variables we will consider with respect to a probability space (Ω, F, P). A deterministic policy µ is a function mapping each state s ∈ S and i = 1, . . . , τ to an action a ∈ A. For each MDP M = (S, A, R^M, P^M, τ, ρ) and policy µ, we define a value function V^M_{µ,i}(s) := E_{M,µ}[ Σ_{j=i}^{τ} R̄^M(s_j, a_j) | s_i = s ], where R̄^M(s, a) denotes the expected reward realized when action a is selected while in state s, and the subscripts of the expectation operator indicate that a_j = µ(s_j, j) and s_{j+1} ∼ P^M(·|s_j, a_j) for j = i, . . . , τ. A policy µ is optimal for the MDP M if V^M_{µ,i}(s) = max_{µ′} V^M_{µ′,i}(s) for all s ∈ S and i = 1, . . . , τ. We will associate with each MDP M a policy µ_M that is optimal for M. The reinforcement learning agent interacts with the MDP over episodes that begin at times t_k = (k − 1)τ + 1, k = 1, 2, . . .. 
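The finite-horizon value function just defined can be computed by backward induction from step τ down to i. A minimal sketch (the array layout, state/action encoding and function name are our own, not the paper's):

```python
import numpy as np

def policy_value(R, P, mu, tau):
    """Evaluate V^M_{mu,i}(s) for a finite-horizon MDP by backward induction.

    R[s, a]   : expected reward for (s, a)
    P[s, a, :]: transition distribution over next states
    mu[s, i]  : action chosen in state s at (0-indexed) step i
    Returns V with V[i, s] = expected sum of rewards from step i onward.
    """
    S = R.shape[0]
    V = np.zeros((tau + 1, S))          # V[tau] = 0 beyond the horizon
    for i in range(tau - 1, -1, -1):
        for s in range(S):
            a = mu[s, i]
            V[i, s] = R[s, a] + P[s, a] @ V[i + 1]
    return V

# Two states, two actions: action 1 in state 0 earns 1; all moves return to state 0.
R = np.array([[0.0, 1.0], [0.0, 0.0]])
P = np.zeros((2, 2, 2)); P[:, :, 0] = 1.0
mu = np.ones((2, 3), dtype=int)             # always pick action 1, horizon 3
print(policy_value(R, P, mu, tau=3)[0, 0])  # 3.0: one unit of reward per step
```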
At each time t, the agent selects an action a_t, observes a scalar reward r_t, and then transitions to s_{t+1}. Let H_t = (s_1, a_1, r_1, . . . , s_{t−1}, a_{t−1}, r_{t−1}) denote the history of observations made prior to time t. A reinforcement learning algorithm is a deterministic sequence {π_k | k = 1, 2, . . .} of functions, each mapping H_{t_k} to a probability distribution π_k(H_{t_k}) over policies which the agent will employ during the kth episode. We define the regret incurred by a reinforcement learning algorithm π up to time T to be: Regret(T, π, M*) := Σ_{k=1}^{⌈T/τ⌉} Δ_k, where Δ_k denotes regret over the kth episode, defined with respect to the MDP M* by Δ_k := Σ_{s∈S} ρ(s)( V^{M*}_{µ*,1}(s) − V^{M*}_{µ_k,1}(s) ) with µ* = µ_{M*} and µ_k ∼ π_k(H_{t_k}). Note that regret is not deterministic since it can depend on the random MDP M*, the algorithm's internal random sampling and, through the history H_{t_k}, on previous random transitions and random rewards. We will assess and compare algorithm performance in terms of regret and its expectation. 3 Factored MDPs Intuitively a factored MDP is an MDP whose rewards and transitions exhibit some conditional independence structure. To formalize this definition we must introduce some more notation common to the literature [11]. Definition 1 (Scope operation for factored sets X = X_1 × . . . × X_n). For any subset of indices Z ⊆ {1, 2, . . . , n} let us define the scope set X[Z] := ∏_{i∈Z} X_i. Further, for any x ∈ X define the scope variable x[Z] ∈ X[Z] to be the value of the variables x_i ∈ X_i with indices i ∈ Z. For singleton sets Z we will write x[i] for x[{i}] in the natural way. Let P_{X,Y} be the set of functions mapping elements of a finite set X to probability mass functions over a finite set Y. P^{C,σ}_{X,ℝ} will denote the set of functions mapping elements of a finite set X to σ-sub-Gaussian probability measures over (ℝ, B(ℝ)) with mean bounded in [0, C]. 
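The scope operation of Definition 1, which reappears in the factored reward and transition definitions that follow, amounts to product sets and tuple restriction. A small illustration with invented helper names:

```python
from itertools import product

def scope_set(factor_domains, Z):
    """X[Z]: the product of the factor domains X_i with indices in Z (Definition 1)."""
    return list(product(*(factor_domains[i] for i in sorted(Z))))

def scope_var(x, Z):
    """x[Z]: the restriction of x = (x_1, .., x_n) to the indices in Z."""
    return tuple(x[i] for i in sorted(Z))

domains = [range(2), range(3), range(2)]     # X = X_1 x X_2 x X_3
print(len(scope_set(domains, {0, 2})))       # |X[Z]| = 2 * 2 = 4
print(scope_var((1, 2, 0), {0, 2}))          # (1, 0)
```

A factored transition then assigns each next-state component s[i] a probability that depends on x only through the scope variable x[Z_i], so the size of each factor's domain is |X[Z_i]| rather than |X|.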
For reinforcement learning we will write X for S × A and consider factored reward and factored transition functions which are drawn from within these families. Definition 2 (Factored reward functions R ∈ R ⊆ P^{C,σ}_{X,ℝ}). The reward function class R is factored over S × A = X = X_1 × . . . × X_n with scopes Z_1, . . . , Z_l if and only if, for all R ∈ R, x ∈ X there exist functions {R_i ∈ P^{C,σ}_{X[Z_i],ℝ}}_{i=1}^{l} such that E[r] = Σ_{i=1}^{l} E[r_i] for r ∼ R(x) distributed as Σ_{i=1}^{l} r_i, with each r_i ∼ R_i(x[Z_i]) and individually observed. Definition 3 (Factored transition functions P ∈ P ⊆ P_{X,S}). The transition function class P is factored over S × A = X = X_1 × . . . × X_n and S = S_1 × . . . × S_m with scopes Z_1, . . . , Z_m if and only if, for all P ∈ P, x ∈ X, s ∈ S there exist some {P_i ∈ P_{X[Z_i],S_i}}_{i=1}^{m} such that P(s|x) = ∏_{i=1}^{m} P_i( s[i] | x[Z_i] ). A factored MDP (FMDP) is then defined to be an MDP with both factored rewards and factored transitions. Writing X = S × A, an FMDP is fully characterized by the tuple M = ( {S_i}_{i=1}^{m}; {X_i}_{i=1}^{n}; {Z^R_i}_{i=1}^{l}; {R_i}_{i=1}^{l}; {Z^P_i}_{i=1}^{m}; {P_i}_{i=1}^{m}; τ; ρ ), where Z^R_i and Z^P_i are the scopes for the reward and transition functions respectively in {1, . . . , n} for X_i. We assume that the size of all scopes |Z_i| ≤ ζ ≪ n and factors |X_i| ≤ K, so that the domains of R_i and P_i are of size at most K^ζ. 4 Results Our first result shows that we can bound the expected regret of PSRL. Theorem 1 (Expected regret for PSRL in factored MDPs). Let M* be factored with graph structure G = ( {S_i}_{i=1}^{m}; {X_i}_{i=1}^{n}; {Z^R_i}_{i=1}^{l}; {Z^P_i}_{i=1}^{m}; τ ). If φ is the distribution of M* and Ψ is the span of the optimal value function then we can bound the regret of PSRL: E[ Regret(T, π^{PS}_τ, M*) ] ≤ Σ_{i=1}^{l} { 5τC|X[Z^R_i]| + 12σ √( |X[Z^R_i]| T log(4l|X[Z^R_i]|kT) ) } + 2√T + 4 + E[Ψ] (1 + 4/(T − 4)) Σ_{j=1}^{m} { 5τ|X[Z^P_j]| + 12 √( |X[Z^P_j]||S_j| T log(4m|X[Z^P_j]|kT) ) } (1) We have a similar result for UCRL-Factored that holds with high probability. 
Theorem 2 (High probability regret for UCRL-Factored in factored MDPs). Let M* be factored with graph structure G = ( {S_i}_{i=1}^{m}; {X_i}_{i=1}^{n}; {Z^R_i}_{i=1}^{l}; {Z^P_i}_{i=1}^{m}; τ ). If D is the diameter of M*, then for any M* we can bound the regret of UCRL-Factored: Regret(T, π^{UC}_τ, M*) ≤ Σ_{i=1}^{l} { 5τC|X[Z^R_i]| + 12σ √( |X[Z^R_i]| T log(12l|X[Z^R_i]|kT/δ) ) } + 2√T + CD √( 2T log(6/δ) ) + CD Σ_{j=1}^{m} { 5τ|X[Z^P_j]| + 12 √( |X[Z^P_j]||S_j| T log(12m|X[Z^P_j]|kT/δ) ) } (2) with probability at least 1 − δ. Both algorithms give bounds Õ( Ξ Σ_{j=1}^{m} √( |X[Z^P_j]||S_j| T ) ) where Ξ is a measure of MDP connectedness: expected span E[Ψ] for PSRL and scaled diameter CD for UCRL-Factored. The span of an MDP is the maximum difference in value of any two states under the optimal policy, Ψ(M*) := max_{s,s′∈S} { V^{M*}_{µ*,1}(s) − V^{M*}_{µ*,1}(s′) }. The diameter of an MDP is the maximum number of expected timesteps to get between any two states, D(M*) = max_{s≠s′} min_{µ} T^{µ}_{s→s′}. PSRL's bounds are tighter since Ψ(M) ≤ CD(M) and may be exponentially smaller. However, UCRL-Factored has stronger probabilistic guarantees than PSRL since its bounds hold with high probability for any MDP M*, not just in expectation. There is an optimistic algorithm REGAL [5] which formally replaces the UCRL2 D with Ψ and retains the high probability guarantees. An analogous extension to REGAL-Factored is possible; however, no practical implementation of that algorithm exists even with an FMDP planner. The algebra in Theorems 1 and 2 can be overwhelming. For clarity, we present a symmetric problem instance for which we can produce a cleaner single-term upper bound. Let Q be shorthand for the simple graph structure with l + 1 = m, C = σ = 1, |S_i| = |X_i| = K and |Z^R_i| = |Z^P_j| = ζ for i = 1, . . . , l and j = 1, . . . , m; we will write J = K^ζ. Corollary 1 (Clean bounds for PSRL in a symmetric problem). 
If φ is the distribution of M* with structure Q then we can bound the regret of PSRL: E[ Regret(T, π^{PS}_τ, M*) ] ≤ 15mτ √( JKT log(2mJT) ) (3) Corollary 2 (Clean bounds for UCRL-Factored in a symmetric problem). For any MDP M* with structure Q we can bound the regret of UCRL-Factored: Regret(T, π^{UC}_τ, M*) ≤ 15mτ √( JKT log(12mJT/δ) ) (4) with probability at least 1 − δ. Both algorithms satisfy bounds of Õ(τm√(JKT)), which is exponentially tighter than can be obtained by any Q-naive algorithm. For a factored MDP with m independent components with S states and A actions the bound Õ(mS√(AT)) is close to the lower bound Ω(m√(SAT)) and so the bound is near optimal. The corollaries follow directly from Theorems 1 and 2 as shown in Appendix B. 5 Confidence sets Our analysis will rely upon the construction of confidence sets based around the empirical estimates for the underlying reward and transition functions. The confidence sets are constructed to contain the true MDP with high probability. This technique is common to the literature, but we will exploit the additional graph structure G to sharpen the bounds. Consider a family of functions F ⊆ M_{X,(Y,Σ_Y)} which takes x ∈ X to a probability distribution over (Y, Σ_Y). We will write M_{X,Y} unless we wish to stress a particular σ-algebra. Definition 4 (Set widths). Let X be a finite set, and let (Y, Σ_Y) be a measurable space. The width of a set F ⊆ M_{X,Y} at x ∈ X with respect to a norm ‖ · ‖ is w_F(x) := sup_{f,f̄∈F} ‖(f − f̄)(x)‖. Our confidence set sequence {F_t ⊆ F : t ∈ ℕ} is initialized with a set F. We adapt our confidence set to the observations y_t ∈ Y which are drawn from the true function f* ∈ F at measurement points x_t ∈ X, so that y_t ∼ f*(x_t). 
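The width of Definition 4 is directly computable when F is a finite family. A toy sketch with the L1 norm over probability vectors (the example functions are invented for illustration only):

```python
def width(F, x):
    """w_F(x) = sup over f, fbar in F of ||(f - fbar)(x)||_1, for a finite
    family F of functions mapping x to probability vectors (Definition 4)."""
    return max(
        sum(abs(fx - gx) for fx, gx in zip(f(x), g(x)))
        for f in F for g in F
    )

# Toy family: two candidate distributions over a 2-element outcome space.
f1 = lambda x: (0.5, 0.5)
f2 = lambda x: (0.9, 0.1)
print(width([f1, f2], x=0))   # |0.5-0.9| + |0.5-0.1|, approximately 0.8
```

As data accumulates, the confidence-set construction described next shrinks F around the empirical estimate, so this width, and hence the per-step estimation error, decays.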
Each confidence set is then centered around an empirical estimate f̂_t ∈ M_{X,Y} at time t, defined by f̂_t(x) = (1/n_t(x)) Σ_{τ<t: x_τ=x} δ_{y_τ}, where n_t(x) is the number of times x appears in (x_1, . . . , x_{t−1}) and δ_{y_t} is the probability mass function over Y that assigns all probability to the outcome y_t. Our sequence of confidence sets depends on our choice of norm ‖ · ‖ and a non-decreasing sequence {d_t : t ∈ ℕ}. For each t, the confidence set is defined by: F_t = F_t(‖ · ‖, x_1^{t−1}, d_t) := { f ∈ F : ‖(f − f̂_t)(x_i)‖ ≤ √( d_t / n_t(x_i) ) ∀i = 1, . . . , t − 1 }, where x_1^{t−1} is shorthand for (x_1, . . . , x_{t−1}) and we interpret n_t(x_i) = 0 as a null constraint. The following result shows that we can bound the sum of confidence widths through time. Theorem 3 (Bounding the sum of widths). For all finite sets X, measurable spaces (Y, Σ_Y), function classes F ⊆ M_{X,Y} with uniformly bounded widths w_F(x) ≤ C_F ∀x ∈ X and non-decreasing sequences {d_t : t ∈ ℕ}: Σ_{k=1}^{L} Σ_{i=1}^{τ} w_{F_k}(x_{t_k+i}) ≤ 4( τC_F|X| + 1 ) + 4√( 2 d_T |X| T ) (5) Proof. The proof follows from elementary counting arguments on n_t(x) and the pigeonhole principle. A full derivation is given in Appendix A. 6 Algorithms With our notation established, we are now able to introduce our algorithms for efficient learning in factored MDPs. PSRL and UCRL-Factored proceed in episodes of fixed policies. At the start of the kth episode they produce a candidate MDP M_k and then proceed with the policy which is optimal for M_k. In PSRL, M_k is generated by a sample from the posterior for M*, whereas UCRL-Factored chooses M_k optimistically from the confidence set M_k. Both algorithms require prior knowledge of the graphical structure G and an approximate planner for FMDPs. We will write Γ(M, ε) for a planner which returns an ε-optimal policy for M. We will write Γ̃(M, ε) for a planner which returns an ε-optimal policy for the most optimistic realization from a family of MDPs M. 
Given Γ it is possible to obtain Γ̃ through extended value iteration, although this might become computationally intractable [4]. PSRL remains identical to earlier treatment [13, 6] provided G is encoded in the prior φ. UCRL-Factored is a modification to UCRL2 that can exploit the graph and episodic structure of G. We write R^i_t(d^{R_i}_t) and P^j_t(d^{P_j}_t) as shorthand for the confidence sets R^i_t(|E[·]|, x_1^{t−1}[Z^R_i], d^{R_i}_t) and P^j_t(‖ · ‖_1, x_1^{t−1}[Z^P_j], d^{P_j}_t) generated from initial sets R^i_1 = P^{C,σ}_{X[Z^R_i],ℝ} and P^j_1 = P_{X[Z^P_j],S_j}. We should note that UCRL2 was designed to obtain regret bounds even in MDPs without episodic reset. This is accomplished by imposing artificial episodes which end whenever the number of visits to a state-action pair is doubled [4]. It is simple to extend UCRL-Factored's guarantees to this setting using this same strategy. This will not work for PSRL, since our current analysis requires that the episode length is independent of the sampled MDP. Nevertheless, there has been good empirical performance using this method for MDPs without episodic reset in simulation [6]. Algorithm 1 PSRL (Posterior Sampling) 1: Input: Prior φ encoding G, t = 1 2: for episodes k = 1, 2, . . . do 3: sample M_k ∼ φ(·|H_t) 4: compute µ_k = Γ(M_k, τ/k) 5: for timesteps j = 1, . . . , τ do 6: sample and apply a_t = µ_k(s_t, j) 7: observe r_t and s_{t+1} 8: t = t + 1 9: end for 10: end for Algorithm 2 UCRL-Factored (Optimism) 1: Input: Graph structure G, confidence δ, t = 1 2: for episodes k = 1, 2, . . . do 3: d^{R_i}_t = 4σ² log(4l|X[Z^R_i]|k/δ) for i = 1, . . . , l 4: d^{P_j}_t = 4|S_j| log(
4m|X[Z^P_j]|k/δ) for j = 1, . . . , m 5: M_k = { M | G, R^i ∈ R^i_t(d^{R_i}_t), P^j ∈ P^j_t(d^{P_j}_t) ∀i, j } 6: compute µ_k = Γ̃(M_k, τ/k) 7: for timesteps u = 1, . . . , τ do 8: sample and apply a_t = µ_k(s_t, u) 9: observe r^1_t, . . . , r^l_t and s^1_{t+1}, . . . , s^m_{t+1} 10: t = t + 1 11: end for 12: end for 7 Analysis For our common analysis of PSRL and UCRL-Factored we will let M̃_k refer generally to either the sampled MDP used in PSRL or the optimistic MDP chosen from M_k, with associated policy µ̃_k. We introduce the Bellman operator T^M_µ, which for any MDP M = (S, A, R^M, P^M, τ, ρ), stationary policy µ : S → A and value function V : S → ℝ, is defined by T^M_µ V(s) := R̄^M(s, µ(s)) + Σ_{s′∈S} P^M(s′|s, µ(s)) V(s′). This returns the expected value of state s where we follow the policy µ under the laws of M, for one time step. We will streamline our discussion of P^M, R̄^M, V^M_{µ,i} and T^M_µ by simply writing * in place of M* or µ* and k in place of M̃_k or µ̃_k where appropriate; for example V^*_{k,i} := V^{M*}_{µ̃_k,i}. We will also write x_{k,i} := (s_{t_k+i}, µ_k(s_{t_k+i})). We now break down the regret by adding and subtracting the imagined near optimal reward of policy µ̃_k, which is known to the agent. For clarity of analysis we consider only the case of ρ(s′) = 1{s′ = s}, but this changes nothing for our consideration of finite S. Δ_k = V^*_{*,1}(s) − V^*_{k,1}(s) = ( V^k_{k,1}(s) − V^*_{k,1}(s) ) + ( V^*_{*,1}(s) − V^k_{k,1}(s) ) (6) V^*_{*,1} − V^k_{k,1} relates the optimal rewards of the MDP M* to those near optimal for M̃_k. We can bound this difference by the planning accuracy 1/k for PSRL in expectation, since M* and M_k are equal in law, and for UCRL-Factored in high probability by optimism. We decompose the first term through repeated application of dynamic programming: ( V^k_{k,1} − V^*_{k,1} )(s_{t_k+1}) = Σ_{i=1}^{τ} ( T^k_{k,i} − T^*_{k,i} ) V^k_{k,i+1}(s_{t_k+i}) + Σ_{i=1}^{τ} d_{t_k+i}. 
(7) where d_{t_k+i} := Σ_{s∈S} { P^*(s|x_{k,i})( V^*_{k,i+1} − V^k_{k,i+1} )(s) } − ( V^*_{k,i+1} − V^k_{k,i+1} )(s_{t_k+i}) is a martingale difference bounded by Ψ_k, the span of V^k_{k,i}. For UCRL-Factored we can use optimism to say that Ψ_k ≤ CD [4] and apply the Azuma-Hoeffding inequality to say that: P( Σ_{k=1}^{⌈T/τ⌉} Σ_{i=1}^{τ} d_{t_k+i} > CD √( 2T log(2/δ) ) ) ≤ δ (8) The remaining term is the one step Bellman error of the imagined MDP M̃_k. Crucially this term only depends on states and actions x_{k,i} which are actually observed. We can now use the Hölder inequality to bound Σ_{i=1}^{τ} ( T^k_{k,i} − T^*_{k,i} ) V^k_{k,i+1}(s_{t_k+i}) ≤ Σ_{i=1}^{τ} |R̄^k(x_{k,i}) − R̄^*(x_{k,i})| + (1/2) Ψ_k ‖P^k(·|x_{k,i}) − P^*(·|x_{k,i})‖_1 (9) 7.1 Factorization decomposition We aim to exploit the graphical structure G to create more efficient confidence sets M_k. It is clear from (9) that we may upper bound the deviations of R̄^*, R̄^k factor-by-factor using the triangle inequality. Our next result, Lemma 1, shows we can also do this for the transition functions P^* and P^k. This is the key result that allows us to build confidence sets around each factor P^*_j rather than P^* as a whole. Lemma 1 (Bounding factored deviations). Let the transition function class P ⊆ P_{X,S} be factored over X = X_1 × . . . × X_n and S = S_1 × . . . × S_m with scopes Z_1, . . . , Z_m. Then, for any P, P̃ ∈ P we may bound their L1 distance by the sum of the differences of their factorizations: ‖P(x) − P̃(x)‖_1 ≤ Σ_{i=1}^{m} ‖P_i(x[Z_i]) − P̃_i(x[Z_i])‖_1. Proof. We begin with the simple claim that for any α_1, α_2, β_1, β_2 ∈ (0, 1]: |α_1α_2 − β_1β_2| = α_2 |α_1 − β_1β_2/α_2| ≤ α_2 ( |α_1 − β_1| + |β_1 − β_1β_2/α_2| ) ≤ α_2 |α_1 − β_1| + β_1 |α_2 − β_2|. This result also holds for any α_1, α_2, β_1, β_2 ∈ [0, 1], where the cases involving 0 can be verified individually. We now consider the probability distributions p, p̃ over {1, . . . , d_1} and q, q̃ over {1, . . . , d_2}. We let Q = pqᵀ, Q̃ = p̃q̃ᵀ be the joint probability distributions over {1, . . . , d_1} × {1, . . . , d_2}. 
Using the claim above we bound the L1 deviation ‖Q − Q̃‖_1 by the deviations of their factors: ‖Q − Q̃‖_1 = Σ_{i=1}^{d_1} Σ_{j=1}^{d_2} |p_i q_j − p̃_i q̃_j| ≤ Σ_{i=1}^{d_1} Σ_{j=1}^{d_2} ( q_j |p_i − p̃_i| + p̃_i |q_j − q̃_j| ) = ‖p − p̃‖_1 + ‖q − q̃‖_1. We conclude the proof by applying this m times to the factored transitions P and P̃. 7.2 Concentration guarantees for M_k We now want to show that the true MDP lies within M_k with high probability. Note that posterior sampling will also allow us to then say that the sampled M_k is within M_k with high probability too. In order to show this, we first present a concentration result for the L1 deviation of empirical probabilities. Lemma 2 (L1 bounds for the empirical transition function). For all finite sets X, finite sets Y and function classes P ⊆ P_{X,Y}, and for any x ∈ X, ε > 0, the deviation of the true distribution P^* from the empirical estimate after t samples, P̂_t, is bounded: P( ‖P^*(x) − P̂_t(x)‖_1 ≥ ε ) ≤ exp( |Y| log(2) − n_t(x)ε²/2 ). Proof. This is a relaxation of the result proved by Weissman [19]. Lemma 2 ensures that for any x ∈ X, P( ‖P^*_j(x) − P̂_{jt}(x)‖_1 ≥ √( (2|S_j|/n_t(x)) log(2/δ′) ) ) ≤ δ′. We then define d^{P_j}_{t_k} = 2|S_j| log(2/δ′_{k,j}) with δ′_{k,j} = δ/(2m|X[Z^P_j]|k²). Now using a union bound we conclude P( P^*_j ∈ P^j_t(d^{P_j}_{t_k}) ∀k ∈ ℕ, j = 1, . . . , m ) ≥ 1 − δ. Lemma 3 (Tail bounds for sub σ-gaussian random variables). If {ε_i} are all independent and sub σ-gaussian then ∀β ≥ 0: P( (1/n) |Σ_{i=1}^{n} ε_i| > β ) ≤ exp( log(2) − nβ²/(2σ²) ). A similar argument now ensures that P( R^*_i ∈ R^i_t(d^{R_i}_{t_k}) ∀k ∈ ℕ, i = 1, . . . , l ) ≥ 1 − δ, and so P( M* ∈ M_k ∀k ∈ ℕ ) ≥ 1 − 2δ (10) 7.3 Regret bounds We now have all the necessary intermediate results to complete our proof. We begin with the analysis of PSRL. Using equation (10) and the fact that M*, M_k are equal in law by posterior sampling, we can say that P(M*, M_k ∈ M_k ∀k ∈ ℕ) ≥ 1 − 4δ. The contributions from regret in the planning function Γ are bounded by Σ_{k=1}^{⌈T/τ⌉} √(τ/k) ≤ 2√T. 
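The planning-error sum Σ_k √(τ/k) ≤ 2√T (an integral-comparison bound; the per-episode accuracy √(τ/k) is our reading of the garbled source) can be sanity-checked numerically:

```python
import math

# Check: sum over k = 1..ceil(T/tau) of sqrt(tau/k) stays below 2*sqrt(T).
# This follows from sum_{k<=K} 1/sqrt(k) <= 2*sqrt(K) and tau*K ~ T.
for tau in (1, 5, 20):
    for T in (10, 1000, 10**5):
        K = math.ceil(T / tau)
        total = sum(math.sqrt(tau / k) for k in range(1, K + 1))
        assert total <= 2 * math.sqrt(T), (tau, T, total)
print("planning-error sum bounded by 2*sqrt(T) in all tested cases")
```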
From here we take equation (9), Lemma 1 and Theorem 3 to say that for any δ > 0: E[ Regret(T, π^{PS}_τ, M*) ] ≤ 4δT + 2√T + Σ_{i=1}^{l} { 4(τC|X[Z^R_i]| + 1) + 4√( 2 d^{R_i}_T |X[Z^R_i]| T ) } + sup_{k=1,...,L} ( E[Ψ_k | M_k, M* ∈ M_k] ) × Σ_{j=1}^{m} { 4(τ|X[Z^P_j]| + 1) + 4√( 2 d^{P_j}_T |X[Z^P_j]| T ) }. Let A = {M*, M_k ∈ M_k}; since Ψ_k ≥ 0 and by posterior sampling E[Ψ_k] = E[Ψ] for all k: E[Ψ_k | A] ≤ P(A)^{−1} E[Ψ] ≤ ( 1 − 4δ/k² )^{−1} E[Ψ] = ( 1 + 4δ/(k² − 4δ) ) E[Ψ] ≤ ( 1 + 4δ/(1 − 4δ) ) E[Ψ]. Plugging in d^{R_i}_T and d^{P_j}_T and setting δ = 1/T completes the proof of Theorem 1. The analysis of UCRL-Factored and Theorem 2 follows similarly from (8) and (10). Corollaries 1 and 2 follow from substituting the structure Q and upper bounding the constant and logarithmic terms. This is presented in detail in Appendix B. 8 Conclusion We present the first algorithms with near-optimal regret bounds in factored MDPs. Many practical problems for reinforcement learning have extremely large state and action spaces, and our results allow us to obtain meaningful performance guarantees even in previously intractably large systems. However, our analysis leaves several important questions unaddressed. First, we assume access to an approximate FMDP planner that may be computationally prohibitive in practice. Second, we assume that the graph structure is known a priori, but there are other algorithms that seek to learn this from experience [20, 21]. Finally, we might consider dimensionality reduction in large MDPs more generally, where either the rewards, transitions or optimal value function are known to belong to some function class F, to obtain bounds that depend on the dimensionality of F. Acknowledgments Osband is supported by Stanford Graduate Fellowships courtesy of PACCAR Inc. This work was supported in part by Award CMMI-0968707 from the National Science Foundation. References [1] Apostolos Burnetas and Michael Katehakis. Optimal adaptive policies for Markov decision processes. 
Mathematics of Operations Research, 22(1):222–255, 1997. [2] Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209–232, 2002. [3] Ronen Brafman and Moshe Tennenholtz. R-max: a general polynomial time algorithm for near-optimal reinforcement learning. The Journal of Machine Learning Research, 3:213–231, 2003. [4] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. The Journal of Machine Learning Research, 99:1563–1600, 2010. [5] Peter Bartlett and Ambuj Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 35–42. AUAI Press, 2009. [6] Ian Osband, Daniel Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. Advances in Neural Information Processing Systems, 2013. [7] Craig Boutilier, Richard Dearden, and Moisés Goldszmidt. Stochastic dynamic programming with factored representations. Artificial Intelligence, 121(1):49–107, 2000. [8] Zoubin Ghahramani. Learning dynamic Bayesian networks. In Adaptive Processing of Sequences and Data Structures, pages 168–197. Springer, 1998. [9] Alexander Strehl. Model-based reinforcement learning in factored-state MDPs. In Approximate Dynamic Programming and Reinforcement Learning (ADPRL 2007), IEEE International Symposium on, pages 103–110. IEEE, 2007. [10] Michael Kearns and Daphne Koller. Efficient reinforcement learning in factored MDPs. In IJCAI, volume 16, pages 740–747, 1999. [11] István Szita and András Lőrincz. Optimistic initialization and greediness lead to polynomial time learning in factored MDPs. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1001–1008. ACM, 2009. [12] William Thompson. 
On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933. [13] Malcolm Strens. A Bayesian framework for reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning, pages 943–950, 2000. [14] Carlos Guestrin, Daphne Koller, Ronald Parr, and Shobha Venkataraman. Efficient solution algorithms for factored MDPs. Journal of Artificial Intelligence Research (JAIR), 19:399–468, 2003. [15] Daphne Koller and Ronald Parr. Policy iteration for factored MDPs. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 326–334. Morgan Kaufmann Publishers Inc., 2000. [16] Carlos Guestrin, Daphne Koller, and Ronald Parr. Max-norm projections for factored MDPs. In IJCAI, volume 1, pages 673–682, 2001. [17] Karina Valdivia Delgado, Scott Sanner, and Leliane Nunes De Barros. Efficient solutions to factored MDPs with imprecise transition probabilities. Artificial Intelligence, 175(9):1498–1527, 2011. [18] Scott Sanner and Craig Boutilier. Approximate linear programming for first-order MDPs. arXiv preprint arXiv:1207.1415, 2012. [19] Tsachy Weissman, Erik Ordentlich, Gadiel Seroussi, Sergio Verdú, and Marcelo J. Weinberger. Inequalities for the L1 deviation of the empirical distribution. Hewlett-Packard Labs, Tech. Rep., 2003. [20] Alexander Strehl, Carlos Diuk, and Michael Littman. Efficient structure learning in factored-state MDPs. In AAAI, volume 7, pages 645–650, 2007. [21] Carlos Diuk, Lihong Li, and Bethany R. Leffler. The adaptive k-meteorologists problem and its application to structure learning and feature selection in reinforcement learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 249–256. ACM, 2009.
Learning to Search in Branch-and-Bound Algorithms* He He Hal Daumé III Department of Computer Science University of Maryland College Park, MD 20740 {hhe,hal}@cs.umd.edu Jason Eisner Department of Computer Science Johns Hopkins University Baltimore, MD 21218 jason@cs.jhu.edu Abstract Branch-and-bound is a widely used method in combinatorial optimization, including mixed integer programming, structured prediction and MAP inference. While most work has been focused on developing problem-specific techniques, little is known about how to systematically design the node searching strategy on a branch-and-bound tree. We address the key challenge of learning an adaptive node searching order for any class of problem solvable by branch-and-bound. Our strategies are learned by imitation learning. We apply our algorithm to linear programming based branch-and-bound for solving mixed integer programs (MIP). We compare our method with one of the fastest open-source solvers, SCIP, and a very efficient commercial solver, Gurobi. We demonstrate that our approach achieves better solutions faster on four MIP libraries. 1 Introduction Branch-and-bound (B&B) [1] is a systematic enumerative method for global optimization of nonconvex and combinatorial problems. In the machine learning community, B&B has been used as an inference tool in MAP estimation [2, 3]. In applied domains, it has been applied to the "inference" stage of structured prediction problems (e.g., dependency parsing [4, 5], scene understanding [6], ancestral sequence reconstruction [7]). B&B recursively divides the feasible set of a problem into disjoint subsets, organized in a tree structure, where each node represents a subproblem that searches only the subset at that node. If computing bounds on a subproblem does not rule out the possibility that its subset contains the optimal solution, the subset can be further partitioned ("branched") as needed. 
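The recursive bound-and-fathom scheme just described can be made concrete with a best-first B&B sketch on a 0/1 knapsack instance; this toy uses a greedy fractional relaxation as the optimistic bound and is only an illustration of the framework, not the paper's MILP setting:

```python
import heapq

def branch_and_bound_knapsack(values, weights, capacity):
    """Best-first branch-and-bound for 0/1 knapsack (illustration only)."""
    items = sorted(range(len(values)), key=lambda i: -values[i] / weights[i])

    def bound(level, value, room):
        # Optimistic bound: fill remaining room greedily, allowing fractions
        # (the analogue of an LP relaxation for this toy problem).
        for i in items[level:]:
            if weights[i] <= room:
                room -= weights[i]; value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    best = 0
    # Max-heap on the bound: nodes are (-bound, level, value, room).
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]
    while heap:
        neg_ub, level, value, room = heapq.heappop(heap)
        if -neg_ub <= best or level == len(items):   # fathom this subtree
            best = max(best, value)
            continue
        i = items[level]
        if weights[i] <= room:                        # branch: take item i
            heapq.heappush(heap, (-bound(level + 1, value + values[i],
                                         room - weights[i]),
                                  level + 1, value + values[i],
                                  room - weights[i]))
        heapq.heappush(heap, (-bound(level + 1, value, room),  # skip item i
                              level + 1, value, room))
    return best

print(branch_and_bound_knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```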
A crucial question in B&B is how to specify the order in which nodes are considered. An effective node ordering strategy guides the search to promising areas in the tree and improves the chance of quickly finding a good incumbent solution, which can be used to rule out other nodes. Unfortunately, no theoretically guaranteed general solution for node ordering is currently known. Instead of designing node ordering heuristics manually for each problem type, we propose to speed up B&B search by automatically learning search heuristics that are adapted to a family of problems. • Non-problem-dependent learning. While our approach learns problem-specific policies, it can be applied to any family of problems solvable by the B&B framework. We use imitation learning to automatically learn the heuristics, free of the trial-and-error tuning and rule design by domain experts in most B&B algorithms. • Dynamic decision-making. Our decision-making process is adaptive on three scales. First, it learns different strategies for different problem types. Second, within a problem type, it can evaluate the hardness of a problem instance based on features describing the solving progress. Third, within a problem instance, it adapts the searching strategy to different levels of the B&B tree and makes decisions based on node-specific features. • Easy incorporation of heuristics. Most hand-designed strategies handle only a few heuristics, and they set weights on different heuristics by domain knowledge or manual experimentation. In our model, multiple heuristics can be simply plugged in as state features for the policy, allowing a hybrid "heuristic" to be learned effectively. *This material is based upon work supported by the National Science Foundation under Grant No. 0964681. [Figure 1: Using branch-and-bound to solve an integer linear programming minimization. The example problem is min −2x − y s.t. 3x − 5y ≤ 0, 3x + 5y ≤ 15, x ≥ 0, y ≥ 0, x, y ∈ Z; the figure shows the node expansion order, the branching conditions on each edge, local and global lower/upper bounds, the optimal nodes and the fathomed nodes.] We assume that a small set of solved problems are given at training time and the problems to be solved at test time are of the same type. We learn a node selection policy and a node pruning policy from solving the training problems. The node selection policy repeatedly picks a node from the queue of all unexplored nodes, and the node pruning policy decides if the popped node is worth expanding. We formulate B&B search as a sequential decision-making process. We design a simple oracle that knows the optimal solution in advance and only expands nodes containing the optimal solution. We then use imitation learning to learn policies that mimic the oracle's behavior without perfect information; these policies must even mimic how the oracle would act in states that the oracle would not itself reach, as such states may be encountered at test time. We apply our approach to linear programming (LP) based B&B for solving mixed integer linear programming (MILP) problems, and achieve better solutions faster on 4 MILP problem libraries than Gurobi, a recent fast commercial solver competitive with Cplex, and SCIP, one of the fastest open-source solvers [8]. 2 The Branch-and-Bound Framework: An Application in Mixed Integer Linear Programming Consider an optimization problem of minimizing f over a feasible set F, where F is usually discrete. B&B uses a divide and conquer strategy: F is recursively divided into its subsets F1, F2, . . .
, Fp such that F = ∪_{i=1}^{p} Fi. The recursion tree is an enumeration tree of all feasible solutions, whose nodes are subproblems and edges are the partition conditions. Slightly abusing notation, we will use Fi to refer to both the subset and its corresponding B&B node from now on. A (convex) relaxation of each subproblem is solved to provide an upper/lower bound for that node and its descendants. We denote the upper and lower bound at node i by ℓub(Fi) and ℓlb(Fi) respectively, where ℓub and ℓlb are bounding functions. A common setting where B&B is ubiquitously applied is MILP. A MILP optimization problem has linear objective and constraints, and also requires specified variables to be integer. We assume we are minimizing the objective function in MILP from now on. At each node, we drop the integrality constraints and solve its LP relaxation. We present a concrete example in Figure 1. The optimization problem is shown in the lower right corner. At node i, a local lower bound (shown in the lower half of each circle) is found by the LP solver. A local upper bound (shown in the upper part of the circle) is available if a feasible solution is found at this node. We automatically get an upper bound if the LP solution happens to be integer feasible, or we may obtain it by heuristics. B&B maintains a queue L of active nodes, starting with a single root node on it. At each step, we pop a node Fi from L using a node selection strategy, and compute its bounds. A node Fi is fathomed (i.e., no further exploration in its subtree) if one of the following cases is true: (a) ℓlb(Fi) is larger than the current global upper bound, which means all solutions in its subtree can not possibly be better than the incumbent; (b) ℓlb(Fi) = ℓub(Fi); at this point, B&B has found the best solution in the current subtree; (c) The subproblem is infeasible. In Figure 1, fathomed nodes are shown in double circles and infeasible nodes are labeled by "INF". If a node is not fathomed, it is branched into children of Fi that are pushed onto L. Branching conditions are shown next to each edge in Figure 1. The algorithm terminates when L is empty or the gap between the global upper bound and lower bound achieves a specified tolerance level. In the example in Figure 1, we follow a DFS order. Starting from the root node, the blue arrows point to the next node popped from L to be branched. Updated global lower and upper bounds after a node expansion are shown on the board under each branched node. Algorithm 1 Policy Learning (π*_S, π*_P): π^(1)_P = π*_P, π^(1)_S = π*_S, D_S = {}, D_P = {}; for k = 1 to N do: for Q in problem set Q do: D^(Q)_S, D^(Q)_P ← COLLECTEXAMPLE(Q, π^(k)_P, π^(k)_S); D_S ← D_S ∪ D^(Q)_S, D_P ← D_P ∪ D^(Q)_P; π^(k+1)_S, π^(k+1)_P ← train classifiers using D_S and D_P; return best π^(k)_S, π^(k)_P on dev set. Figure 2: Our method at runtime (left) and the policy learning algorithm (right). Left: our policy-guided branch-and-bound search. Procedures in the rounded rectangles (shown in blue) are executed by policies. Right: the DAgger learning algorithm. We start by using oracle policies π*_S and π*_P to solve problems in Q and collect examples along oracle trajectories. In each iteration, we retrain our policies on all examples collected so far (training sets D_S and D_P), then collect additional examples by running the newly learned policies. The COLLECTEXAMPLE procedure is described in Algorithm 2. 3 Learning Control Policies for Branch-and-Bound A good search strategy should find a good incumbent solution early and identify non-promising nodes before they are expanded. However, naively applying a single heuristic through the whole process ignores the dynamic structure of the B&B tree. For example, DFS should only be used at nodes that promise to lead to a good feasible solution that may replace the incumbent. 
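The runtime loop of Figure 2 (left) can be sketched with the two policies as pluggable callables; all names here are illustrative, not from the authors' implementation:

```python
def policy_guided_search(root, select, prune, expand, is_fathomed):
    """Skeleton of policy-guided B&B: pop by the node selection policy,
    discard by the node pruning policy, otherwise push children."""
    queue, state = [root], {"incumbent": None}
    while queue:
        node = queue.pop(select(queue, state))   # node selection policy
        if is_fathomed(node, state):             # standard fathoming rules
            continue
        if prune(node, state):                   # node pruning policy
            continue
        queue.extend(expand(node))               # push children onto L
    return state["incumbent"]

# Tiny usage: exhaustive DFS over length-3 bit strings, maximizing the sum.
def select(queue, state): return len(queue) - 1  # DFS: pop the newest node
def prune(node, state): return False             # never prune in this toy
def expand(node): return [node + (0,), node + (1,)]
def is_fathomed(node, state):
    if len(node) == 3:                           # complete solution: a leaf
        if state["incumbent"] is None or sum(node) > sum(state["incumbent"]):
            state["incumbent"] = node
        return True
    return False

print(policy_guided_search((), select, prune, expand, is_fathomed))  # (1, 1, 1)
```

Learned selection and pruning policies simply replace `select` and `prune` with classifiers over node and tree features.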
Best-bound-first search can quickly discard unpromising nodes, but should not be used frequently at the top levels of the tree, since the bound estimates are not yet accurate enough there. Therefore, we propose to learn policies that adapt to different problem types and different solving stages. There are two goals in a B&B search: finding the optimal solution and proving its optimality. There is a trade-off between the two: we may be able to return the optimal solution faster if we do not invest the time to prove that all other solutions are worse. Thus, we aim only to search for a "good" (possibly optimal) solution without a rigorous proof of optimality. This allows us to prune unpromising portions of the search tree more aggressively. In addition, obtaining a certificate of optimality is usually of secondary priority for practical purposes. We assume the branching strategy and the bounding functions are given. We guide search on the enumeration tree by two policies. Recall that B&B maintains a priority queue of all nodes to be expanded. The node selection policy determines the priorities used. Once the highest-priority node is popped, the node pruning policy decides whether to discard or expand it given the current progress of the solver. This process continues iteratively until the tree is empty or the gap reaches some specified tolerance. All other techniques used during usual branch-and-bound search can still be applied with our method. The process is shown in Figure 2.

Oracle. Imitation learning requires an oracle at training time to demonstrate the desired behavior. Our ideal oracle would expand nodes in an order that minimized the number of node expansions subject to finding the optimal solution. In real branch-and-bound systems, however, the optimal sequence of expanded nodes cannot be obtained without substantial computation.
After all, the effect of expanding one node depends not only on local information such as the local bounds it obtains, but also on how many pruned nodes it may lead to and on many other interacting strategies such as branching variable selection. Therefore, given our single goal of finding a good solution quickly, we design an oracle that finds the optimal solution without a proof of optimality. We assume optimal solutions are given for training problems.^1 Our node selection oracle π*_S will always expand the node whose feasible set contains the optimal solution. We call such a node an optimal node. For example, in Figure 1, the oracle knows beforehand that the optimal solution is x = 1, y = 2, thus it will only search along the edges y ≥ 2 and x ≤ 1; the optimal nodes are shown in red circles. All other non-optimal nodes are fathomed by the node pruning oracle π*_P, if not already fathomed by the standard rules discussed in Section 2. We denote the optimal node at depth d by F*_d, where d ∈ [0, D] and F*_0 is the root node.

Imitation Learning. We formulate the above approach as a sequential decision-making process, defined by a state space S, an action space A and a policy space Π. A trajectory consists of a sequence of states s_1, s_2, . . . , s_T and actions a_1, a_2, . . . , a_T. A policy π ∈ Π maps a state to an action: π(s_t) = a_t. In our B&B setting, S is the whole tree of nodes visited so far, together with the bounds computed at these nodes. The node selection policy π_S has an action space {select node F_i : F_i ∈ queue of active nodes}, which depends on the current state s_t. The node pruning policy π_P is a binary classifier that predicts a class in {prune, expand}, given s_t and the most recently selected node (the policy is only applied when this node was not fathomed). At training time, the oracle provides an optimal action a* for any possible state s ∈ S. Our goal is to learn a policy that mimics the oracle's actions along the trajectory of states encountered by the policy. Let φ : F_i →
R^p and ψ : F_i → R^q be feature maps for π_S and π_P respectively. The imitation problem can be reduced to supervised learning [9, 10, 11]: the policy (classifier/regressor) takes a feature-vector description of the state s_t and attempts to predict the oracle action a*_t. A generic node selection policy assigns a score to each active node and pops the highest-scoring one. For example, DFS uses a node's depth as its score; best-bound-first search uses a node's lower bound as its score. Following this scheme, we define the score of node i as w^T φ(F_i), and set π_S(s_t) = select node argmax_{F_i ∈ L} w^T φ(F_i), where w is a learned weight vector and L is the queue of active nodes. We obtain w by learning a linear ranking function that defines a total order on the set of nodes on the priority queue: w^T (φ(F_i) − φ(F_i′)) > 0 if F_i is ranked above F_i′. During training, we only specify the order between optimal nodes and non-optimal nodes. At test time, however, a total order is obtained by the classifier's automatic generalization: non-optimal nodes close to optimal nodes in the feature space will be ranked higher. DAgger is an iterative imitation learning algorithm. It repeatedly retrains the policy to make decisions that agree better with the oracle's decisions, in those situations that were encountered when running past versions of the policy. Thus, it learns to deal well with a realistic distribution of situations that may actually arise at test time. Our training algorithm is shown in Algorithm 1, and Algorithm 2 illustrates how we collect examples during B&B. In words: when pushing an optimal node onto the queue, we want it ranked higher than all nodes currently on the queue; when pushing a non-optimal node, we want it ranked lower than the optimal node on the queue if there is one (note that at any time there can be at most one optimal node on the queue); when popping a node from the queue, we want it pruned if it is not optimal.
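The pairwise trick behind w can be sketched directly: each (optimal, non-optimal) pair yields a difference vector φ(F_i) − φ(F_i′) that should score positive. We use a perceptron-style update purely for illustration (the paper trains with an SVM via LIBLINEAR), and the toy two-dimensional features (depth, lower bound) are our own assumption:

```python
import numpy as np

def train_ranker(pairs, epochs=20, lr=0.1):
    """Learn w for a linear ranking score w . phi(F) from pairwise examples.

    `pairs` holds (phi_optimal, phi_other) feature vectors; each yields one
    difference-vector example that must score positive. Perceptron updates
    stand in for the SVM-rank training used in the paper."""
    dim = len(pairs[0][0])
    w = np.zeros(dim)
    for _ in range(epochs):
        for phi_opt, phi_other in pairs:
            d = np.asarray(phi_opt, float) - np.asarray(phi_other, float)
            if w @ d <= 0:        # misranked: optimal node not ahead
                w += lr * d
    return w

# hypothetical features [depth, lower bound]: optimal nodes here are deeper
# and have better (smaller) bounds than the non-optimal nodes they are paired with
pairs = [([3, 1.0], [1, 2.5]), ([4, 0.8], [2, 2.0]), ([2, 1.5], [1, 3.0])]
w = train_ranker(pairs)
score = lambda phi: w @ np.asarray(phi, float)
assert score([3, 1.0]) > score([1, 2.5])
```

At test time the learned score induces a total order over all queue nodes, including pairs never compared during training.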
In the left part of Figure 1, we show training examples collected from the oracle policy.

4 Analysis

We show that our method has the following upper bound on the expected number of branches.

Theorem 1. Given a node selection policy which ranks some non-optimal node higher than an optimal node with probability ε, and a node pruning policy which expands a non-optimal node with probability ε_1 and prunes an optimal node with probability ε_2, assuming ε, ε_1, ε_2 ∈ [0, 0.5] under the policy's state distribution, the expected number of branches is at most

σ(ε, ε_1, ε_2) Σ_{d=0}^{D} (1 − ε_2)^d + ( (1 − ε_2)^{D+1} (1 − ε)ε_1 / (1 − 2ε_1) + 1 ) D,

where σ(ε, ε_1, ε_2) = ( (1 − ε_2)/(1 − 2εε_1) + ε_2/(1 − 2ε_1) ) εε_1.

^1 For prediction tasks, the optimal solutions usually come for free in the training set; otherwise, an off-the-shelf solver can be used.

Algorithm 2 Running the B&B policies and collecting examples for problem Q
  procedure COLLECTEXAMPLE(Q, π_S, π_P)
    L = {F^(Q)_0}, training sets D^(Q)_S = {}, D^(Q)_P = {}, i ← 0
    while L ≠ ∅ do
      F^(Q)_k ← π_S pops a node from L
      if F^(Q)_k is optimal then
        D^(Q)_P ← D^(Q)_P ∪ {(ψ(F^(Q)_k), expand)}
      else
        D^(Q)_P ← D^(Q)_P ∪ {(ψ(F^(Q)_k), prune)}
      if F^(Q)_k is not fathomed and π_P(F^(Q)_k) = expand then
        F^(Q)_{i+1}, F^(Q)_{i+2} ← expand F^(Q)_k; L ← L ∪ {F^(Q)_{i+1}, F^(Q)_{i+2}}; i ← i + 2
      if an optimal node F*^(Q)_d ∈ L then
        D^(Q)_S ← D^(Q)_S ∪ {(φ(F*^(Q)_d) − φ(F^(Q)_{i′}), 1) : F^(Q)_{i′} ∈ L and F^(Q)_{i′} ≠ F*^(Q)_d}
    return D^(Q)_S, D^(Q)_P

Let the optimal node at depth d be F*_d. Note that at each push step, there is at most one optimal node on the queue. Consider a queue holding one optimal node F*_d and m non-optimal nodes ranked before the optimal one. The following lemma is useful in our proof:

Lemma 1. The average number of pops before we get to F*_d is m/(1 − 2εε_1), among which the number of branches is N_B(m, opt) = mε_1/(1 − 2εε_1), and the number of non-optimal nodes pushed after F*_d is N_push(m, opt) = mε_1/(1 − 2εε_1) × [2(1 − ε)^2 + 2ε(1 − ε)] = 2mε_1(1 − ε)/(1 − 2εε_1), where opt indicates the situation in which one optimal node is on the queue.
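The bracketed factor in Lemma 1 collapses because 2(1 − ε)^2 + 2ε(1 − ε) = 2(1 − ε)[(1 − ε) + ε] = 2(1 − ε). A quick numerical check that the two forms of N_push agree:

```python
def n_push(m, eps, eps1):
    # N_push as stated in Lemma 1, with the bracketed factor written out
    return m * eps1 / (1 - 2 * eps * eps1) * (2 * (1 - eps) ** 2 + 2 * eps * (1 - eps))

def n_push_simplified(m, eps, eps1):
    # the simplified closed form: 2 m eps1 (1 - eps) / (1 - 2 eps eps1)
    return 2 * m * eps1 * (1 - eps) / (1 - 2 * eps * eps1)

for eps in (0.0, 0.1, 0.3, 0.5):
    for eps1 in (0.0, 0.2, 0.4):
        assert abs(n_push(10, eps, eps1) - n_push_simplified(10, eps, eps1)) < 1e-12
```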
Consider now a queue holding no optimal node and m non-optimal nodes, which means an optimal internal node has been pruned or the optimal leaf has been found. We have

Lemma 2. The average number of pops to empty the queue is m/(1 − 2ε_1), among which the number of branches is N_B(m, ¬opt) = mε_1/(1 − 2ε_1), where ¬opt indicates the situation in which no optimal node is on the queue.

Proofs of the above two lemmas are given in Appendix A. Let T(M_d, F*_d) denote the number of branches until the queue is empty, after pushing F*_d onto a queue with M_d nodes. The total number of branches during the B&B process is T(0, F*_0). When pushing F*_d, we compare it with all M_d nodes on the queue, and the number of non-optimal nodes ranked before it follows a binomial distribution: m_d ∼ Bin(ε, M_d). We then have the following two cases: (a) F*_d is pruned, with probability ε_2; the expected number of branches is then N_B(M_d, ¬opt). (b) F*_d is not pruned, with probability 1 − ε_2; we first pop all nodes before F*_d, resulting in N_push(m_d, opt) new nodes after it; we then expand F*_d, obtain F*_{d+1}, and push it onto a queue with M_{d+1} = N_push(m_d, opt) + M_d − m_d + 1 nodes. The total expected number of branches is then N_B(m_d, opt) + 1 + T(M_{d+1}, F*_{d+1}). The recursion equation is

T(M_d, F*_d) = E_{m_d ∼ Bin(ε, M_d)} [ (1 − ε_2)(N_B(m_d, opt) + 1 + T(M_{d+1}, F*_{d+1})) + ε_2 N_B(M_d, ¬opt) ].

At termination, we have

T(M_D, F*_D) = E_{m_D ∼ Bin(ε, M_D)} [ (1 − ε_2)(N_B(m_D, opt) + N_B(M_D − m_D, ¬opt)) + ε_2 N_B(M_D, ¬opt) ].

Note that we ignore node fathoming in this recursion. The path of optimal nodes may stop at some F*_d with d < D, thus T(M_d, F*_d) is an upper bound on the actual expected number of branches. The expectation over m_d can be computed by replacing m_d with εM_d, since all terms are linear in m_d. Solving for T(0, F*_0) gives the upper bound in Theorem 1. Details are given in Appendix B. For the oracle, ε = ε_1 = ε_2 = 0, and it branches at most D times when solving a problem.
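The bound of Theorem 1 is easy to evaluate numerically. The sketch below transcribes it directly and checks the oracle limit ε = ε_1 = ε_2 = 0, where the bound collapses to D:

```python
def branch_bound_upper(D, eps, eps1, eps2):
    """Theorem 1's upper bound on the expected number of branches.

    Assumes eps, eps1, eps2 in [0, 0.5]; a direct transcription, not tuned."""
    sigma = ((1 - eps2) / (1 - 2 * eps * eps1) + eps2 / (1 - 2 * eps1)) * eps * eps1
    geom = sum((1 - eps2) ** d for d in range(D + 1))
    tail = ((1 - eps2) ** (D + 1) * (1 - eps) * eps1 / (1 - 2 * eps1) + 1) * D
    return sigma * geom + tail

assert branch_bound_upper(20, 0, 0, 0) == 20        # oracle: at most D branches
assert branch_bound_upper(20, 0.1, 0.1, 0.1) > 20   # imperfect policies branch more
```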
For non-optimal policies, as for all pruning-based methods, our method bears the risk of missing the optimal solution. The depth at which the first optimal node is pruned follows a geometric distribution with mean 1/ε_2. In practice, we can put a higher weight on the class prune to learn a high-precision classifier (smaller ε_2).

5 Experiments

Datasets. We apply our method to LP-based B&B for solving MILP problems. We use four problem libraries suggested in [12]. MIK^2 [13] is a set of MILP problems with knapsack constraints. Regions and Hybrid are sets of problems of determining the winner of a combinatorial auction, generated from different distributions by the Combinatorial Auction Test Suite (CATS)^3 [14]. CORLAT [15] is a real dataset used for the construction of a wildlife corridor for grizzly bears in the Northern Rockies region. The number of variables ranges from 300 to over 1000; the number of constraints ranges from 100 to 500. Each problem set is split into training, test and development sets. Details of the datasets are presented in Appendix C. For each problem, we run SCIP until optimality, and take the (single) returned solution to be the optimal one for purposes of training. We exclude problems which are solved at the root in our experiments.

Policy learning. For each problem set, we randomly split its training set into equal-sized subsets and run DAgger on one subset in each iteration until we have taken two passes over the entire set. Too many passes may result in overfitting for policies in later iterations. We use LIBLINEAR [16] for the classifier-training step in Algorithm 1. Since mistakes during early stages of the search are more serious, our training places higher weight on examples from nodes closer to the root, for both policies. More specifically, the example weights at each level of the B&B tree decay exponentially at rate 2.68/D, where D is the maximum depth^4, corresponding to the fact that the subtree size increases exponentially.
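The depth-decaying example weights can be written down in closed form. The formula below is our reading of the description: weights decay at rate 2.68/D and are anchored so that a depth-1 example weighs 5 and a depth-0.6D example roughly 1 (the exact normalization constant is an assumption, since the text only fixes the two anchor points):

```python
import math

def example_weight(depth, D, w_depth1=5.0, rate=2.68):
    """Exponentially decaying training-example weight by B&B tree depth.

    Decays at `rate`/D per level, anchored so example_weight(1, D) == w_depth1;
    then example_weight(0.6 * D, D) is close to 1 for reasonably large D."""
    return w_depth1 * math.exp(-rate / D * (depth - 1))
```

Mirroring the exponential growth of subtree sizes, two examples one level apart differ in weight by a constant factor exp(2.68/D).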
For pruning policy training, we put a higher weight (tuned from {1, 2, 4, 8}) on the class prune to counter data imbalance and to learn a high-precision classifier, as discussed earlier. The class weight and the SVM's penalty parameter C are tuned for each library on its development set. The features we use fall into three groups: (a) node features, computed from the current node, including its lower bound^5, estimated objective, depth, and whether it is a child/sibling of the last processed node; (b) branching features, computed from the branching variable leading to the current node, including its pseudocost, the difference between the variable's value in the current LP solution and in the root LP solution, and the difference between its value and its current bound; (c) tree features, computed from the B&B tree, including the global upper and lower bounds, the integrality gap, the number of solutions found, and whether the gap is infinite. The node selection policy uses primarily node features and branching features, and the node pruning policy uses primarily branching features and tree features. To combine these features with the depth of the node, we partition the tree into 10 uniform levels, and the features at each level are stacked together. Since the range of objective values varies greatly across problems, we normalize bound-related features by dividing their values by the root node's LP objective. All of the above features are cheap to obtain; in fact, they use information recorded by most solvers anyway, and thus do not add much overhead.

Results. We compare with SCIP (Version 3.1.0, using Cplex 12.6 as the LP solver) and Gurobi (Version 5.6.2). SCIP's default node selection strategy switches between depth-first search and best-first search according to a plunging depth computed online. Gurobi applies different strategies (including pruning) for subtrees rooted at different nodes [17, 18].
^2 Downloaded from http://ieor.berkeley.edu/~atamturk/data
^3 Available at http://www.cs.ubc.ca/~kevinlb/CATS/
^4 The rate is chosen such that examples at depth 1 are weighted by 5 and examples at depth 0.6D by 1.
^5 If the node is a child of the most recently processed node, its LP is not solved yet and its bounds will be the same as its parent's.

          |         Ours          |   Ours (prune only)   | SCIP (time)  | Gurobi (node)
Dataset   | speed   OGap   IGap   | speed   OGap   IGap   | OGap   IGap  | OGap    IGap
MIK       | 4.69x   0.04‰  2.29%  | 4.45x   0.04‰  2.29%  | 3.02‰  1.89% | 0.45‰   2.99%
Regions   | 2.30x   7.21‰  3.52%  | 2.45x   7.68‰  3.58%  | 6.80‰  3.48% | 21.94‰  5.67%
Hybrid    | 1.15x   0.00‰  3.22%  | 1.02x   0.00‰  3.55%  | 0.79‰  4.76% | 3.97‰   5.20%
CORLAT    | 1.63x   8.99%  22.64% | 4.44x   8.91%  17.62% | 6.67%  fail  | 2.67%   fail

Table 1: Performance on solving MILP problems from the four libraries. We compare two versions of our algorithm (one with both search and pruning policies, one with only the pruning policy) against SCIP with a time limit (SCIP (time)) and Gurobi with a node limit (Gurobi (node)). We report results on three measures: the speedup with respect to SCIP in its default setting; the optimality gap (OGap), computed as the percentage difference between the best objective value found and the optimal objective value; and the integrality gap (IGap), computed as the percentage difference between the upper and lower bounds. Here "fail" means the solver cannot find a feasible solution. The numbers are averaged over all instances in each dataset. Bolded scores are statistically tied with the best score according to a t-test with rejection threshold 0.05.

Both solvers adopt the branch-and-cut framework combined with presolvers and primal heuristics. Our solver is implemented on top of SCIP and also calls Cplex 12.6 to solve LPs. We compare runtime with SCIP in its default setting, which does not terminate before a proved status (e.g. solved, infeasible, unbounded).
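The two gap measures in Table 1 reduce to simple percentage differences. The exact denominators below are our assumption (percentage-difference conventions vary); this is a sketch, not the paper's evaluation script:

```python
def ogap(best_found, optimal):
    # optimality gap: distance of the best found objective from the optimum, in %
    return abs(best_found - optimal) / abs(optimal) * 100.0

def igap(global_ub, global_lb):
    # integrality gap: relative distance between the global bounds, in %
    return abs(global_ub - global_lb) / abs(global_ub) * 100.0
```

When a solver finds no feasible solution, neither measure is defined, which is the "fail" entry in the table.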
To compare the tradeoff between runtime and solution quality, we first run our dynamic B&B algorithm and record the average runtime; we then run SCIP with that same time limit. Since runtime is rather implementation-dependent and Gurobi is about four times faster than SCIP [8], we use the number of nodes explored as the time measure for Gurobi. As Gurobi and SCIP apply roughly the same techniques (e.g. cutting-plane generation, heuristics) at each node, we believe that fewer explored nodes would imply a runtime improvement had we implemented our algorithm on top of Gurobi. Accordingly, we set Gurobi's node limit to the average number of nodes explored by our algorithm. The results are summarized in Table 1. Our method speeds up SCIP by up to a factor of 4.7 with less than 1% loss in the objectives of the found solutions on most datasets. On CORLAT, the loss is larger (within 10%) since these problems are generally harder; both SCIP and Gurobi failed to find even one feasible solution within the time/node limit on some problems. Note that SCIP in its default setting works better on Regions and Hybrid, and Gurobi better on the other two, while our adaptive solver performs well consistently. This shows that the effectiveness of strategies is indeed problem dependent.

Ablation analysis. To assess the effects of node selection and pruning separately, we report details of their classification performance in Table 2. Both policies cost negligible time compared with the total runtime. We also show the result of our method with the pruning policy only in Table 1. We can see that the major contribution comes from pruning. We believe there are two main reasons: (a) there may not be enough information in the features to differentiate an optimal node from non-optimal ones; (b) the effect of node selection may be masked by other interacting techniques; for instance, a non-optimal node could lead to better bounds due to the application of cutting planes.

Informative features.
We rank the features at each level of the tree according to the absolute values of their weights for each library. Although each problem set has its own specific weights and feature rankings, a general pattern is that, closer to the top of the tree, the node selection policy prefers nodes that are children of the most recently solved node (resembling DFS) and have better bounds; at lower levels it still prefers deeper nodes but also relies on the pseudocosts of the branching variable and estimates of the node's objective, since these features become more accurate as the search goes deeper. The node pruning policy tends not to prune when few solutions have been found and the gap is infinite; it also relies heavily on the differences between the branching variable's value, its value in the root LP solution, and its current bound.

Cross generalization. To verify that our method learns strategies specific to the problem type, we apply the learned policies across datasets, i.e., we use policies trained on dataset A to solve problems in dataset B. We plot the result as a heatmap in Figure 3, using a measure combining runtime and the optimality gap.

[Figure 3: a 4x4 heatmap; rows and columns range over MIK, CORLAT, Regions, Hybrid; the color bar shows 1/(time + opt. gap) on a scale from 0.00 to 0.90.]

Figure 3: Performance of policies across datasets. The y-axis shows the datasets on which a policy is trained. The x-axis shows the datasets on which a policy is tested. Each block shows 1/(runtime + optimality gap), where runtime and gap are scaled to [0, 1] for experiments on the same test dataset. Values in each row are normalized by the diagonal element of that row.

Dataset | prune rate | prune err FP | prune err FN | comp err | time % (select) | time % (prune)
MIK     | 0.48       | 0.01         | 0.46         | 0.34     | 0.02            | 0.04
Regions | 0.55       | 0.20         | 0.32         | 0.32     | 0.00            | 0.00
Hybrid  | 0.02       | 0.00         | 0.98         | 0.44     | 0.02            | 0.02
CORLAT  | 0.24       | 0.00         | 0.76         | 0.80     | 0.01            | 0.01

Table 2: Classification performance of the node selection and pruning policies.
We report the percentage of nodes pruned (prune rate), the false positive (FP) and false negative (FN) error rates of the pruning policy, the comparison error of the selection policy (only for comparisons between one optimal and one non-optimal node), as well as the percentage of time used on decision making.

We invert the values so that hotter blocks in the figure indicate better performance. Note that there is a hot diagonal. In addition, MIK and CORLAT are relatively unique: policies trained on other datasets lose badly there. On the other hand, Hybrid is friendlier to other policies, which probably suggests that for this library most strategies work almost equally well.

6 Related Work

There is a large amount of work on applying machine learning to make dynamic decisions inside a long-running solver. The idea of learning heuristic functions for combinatorial search algorithms dates back to [19, 20, 21]. Recently, [22] aims to balance load in parallel B&B by predicting the subtree size at each node. The nodes with the largest predicted subtree sizes are further split into smaller problems and sent to the distributed environment in a batch with the other nodes. In [23], an SVM classifier is used to decide whether probing (a bound tightening technique) should be applied at a node in B&B. However, both prior methods handle a relatively simple setting where the model only predicts information about the current state, so that they can simply train by standard supervised learning. This is manifestly not the case for us. Since actions have influence over future states, standard supervised learning does not work as well as DAgger, an imitation learning technique that focuses on the situations most likely to be encountered at test time. Our work is also closely related to speedup learning [24], where the learner observes a solver solving problems and learns patterns from past experience to speed up future computation.
[25] and [26] learned ranking functions to control beam search (a setting similar to ours) in planning and structured prediction, respectively. [27] used supervised learning to imitate strong branching in B&B for solving MIPs. The primary distinction of our work is that we explicitly formulate the problem as a sequential decision-making process, thus taking an action's effects on the future into account. We also add a pruning step besides prioritization for further speedup.

7 Conclusion

We have presented a novel approach to learning an adaptive node searching order for different classes of problems in branch-and-bound algorithms. Our dynamic solver learns when to leave an unpromising area and when to stop with a good enough solution. We have demonstrated on multiple datasets that, compared to a commercial solver, our approach finds solutions with a better objective and establishes a smaller gap, using less time. In the future, we intend to include a time budget in our model so that we can achieve a user-specified trade-off between solution quality and search time. We are also interested in applying multi-task learning to transfer policies between different datasets.

References

[1] A. H. Land and A. G. Doig. An automatic method of solving discrete programming problems. Econometrica, 28:497–520, 1960.
[2] Min Sun, Murali Telaprolu, Honglak Lee, and Silvio Savarese. Efficient and exact MAP-MRF inference using branch and bound. In AISTATS, 2012.
[3] Jörg Hendrik Kappes, Markus Speth, Gerhard Reinelt, and Christoph Schnörr. Towards efficient and exact MAP-inference for large scale discrete computer vision problems via combinatorial optimization. In CVPR, 2013.
[4] Sebastian Riedel, David A. Smith, and Andrew McCallum. Parse, price and cut - delayed column and row generation for graph based parsers. In EMNLP, 2012.
[5] Xian Qian and Yang Liu. Branch and bound algorithm for dependency parsing with non-local features. In TACL, 2013.
[6] Alexander G. Schwing and Raquel Urtasun.
Efficient exact inference for 3D indoor scene understanding. In ECCV, 2012.
[7] Tal Pupko, Itsik Pe'er, Masami Hasegawa, Dan Graur, and Nir Friedman. A branch-and-bound algorithm for the inference of ancestral amino-acid sequences when the replacement rate varies among sites: Application to the evolution of five gene families. 18:1116–1123, 2002.
[8] Hans Mittelmann. Mixed integer linear programming benchmark (MIPLIB2010), 2014.
[9] Umar Syed and Robert E. Schapire. A reduction from apprenticeship learning to classification. In NIPS, 2010.
[10] Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In ICML, 2004.
[11] Stéphane Ross, Geoffrey J. Gordon, and J. Andrew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, 2011.
[12] Frank Hutter, Holger Hoos, and Kevin Leyton-Brown. Automated configuration of mixed integer programming solvers. 2010.
[13] Alper Atamtürk. On the facets of the mixed-integer knapsack polyhedron. 98:145–175, 2003.
[14] Kevin Leyton-Brown, Mark Pearson, and Yoav Shoham. Towards a universal test suite for combinatorial auction algorithms. In Proceedings of the ACM Conference on Electronic Commerce, 2000.
[15] Carla P. Gomes, Willem-Jan van Hoeve, and Ashish Sabharwal. Connections in networks: a hybrid approach. 2008.
[16] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[17] Zonghao Gu, Robert E. Bixby, and Ed Rothberg. The latest advances in mixed-integer programming solvers.
[18] Ed Rothberg. Parallelism in linear and mixed integer programming.
[19] Matthew Lowrie and Benjamin Wah. Learning heuristic functions for numeric optimization problems. In Proceedings of the Twelfth Annual International Computer Software & Applications Conference, 1988.
[20] Justin A. Boyan and Andrew W. Moore.
Learning evaluation functions for global optimization and boolean satisfiability. In National Conference on Artificial Intelligence, 1998.
[21] Sudeshna Sarkar, P. P. Chakrabarti, and Sujoy Ghose. Learning while solving problems in best first search. 28:535–242, 1998.
[22] Lars Otten and Rina Dechter. A case study in complexity estimation: Towards parallel branch-and-bound over graphical models. In UAI, 2012.
[23] Giacomo Nannicini, Pietro Belotti, Jon Lee, Jeff Linderoth, François Margot, and Andreas Wächter. A probing algorithm for MINLP with failure prediction by SVM. 2011.
[24] Alan Fern. Speedup learning. 2007.
[25] Yuehua Xu and Alan Fern. Learning linear ranking functions for beam search with application to planning. 10:1571–1610, 2009.
[26] Hal Daumé III and Daniel Marcu. Learning as search optimization: Approximate large margin methods for structured prediction. In ICML, 2005.
[27] Alejandro Marcos Alvarez, Quentin Louveaux, and Louis Wehenkel. A supervised machine learning approach to variable branching in branch-and-bound. In ECML, 2014.
Bayesian Inference for Structured Spike and Slab Priors

Michael Riis Andersen, Ole Winther & Lars Kai Hansen
DTU Compute, Technical University of Denmark
DK-2800 Kgs. Lyngby, Denmark
{miri, olwi, lkh}@dtu.dk

Abstract

Sparse signal recovery addresses the problem of solving underdetermined linear inverse problems subject to a sparsity constraint. We propose a novel prior formulation, the structured spike and slab prior, which allows incorporating a priori knowledge of the sparsity pattern by imposing a spatial Gaussian process on the spike and slab probabilities. Thus, prior information on the structure of the sparsity pattern can be encoded using generic covariance functions. Furthermore, we provide a Bayesian inference scheme for the proposed model based on the expectation propagation framework. Using numerical experiments on synthetic data, we demonstrate the benefits of the model.

1 Introduction

Consider a linear inverse problem of the form:

y = Ax + e,  (1)

where A ∈ R^{N×D} is the measurement matrix, y ∈ R^N is the measurement vector, x ∈ R^D is the desired solution and e ∈ R^N is a vector of corruptive noise. The field of sparse signal recovery deals with the task of reconstructing the sparse solution x from (A, y) in the ill-posed regime where N < D. In many applications it is beneficial to encourage a structured sparsity pattern rather than independent sparsity. In this paper we consider a model for exploiting a priori information on the sparsity pattern, which has applications in many different fields, e.g., structured sparse PCA [1], background subtraction [2] and neuroimaging [3]. In the framework of probabilistic modelling, sparsity can be enforced using so-called sparsity-promoting priors, which conventionally have the following form:

p(x | λ) = ∏_{i=1}^{D} p(x_i | λ),  (2)

where p(x_i | λ) is the marginal prior on x_i and λ is a fixed hyperparameter controlling the degree of sparsity.
Examples of such sparsity-promoting priors include the Laplace prior (LASSO [4]) and the Bernoulli-Gaussian prior (the spike and slab model [5]). The main advantage of this formulation is that the inference schemes become relatively simple, owing to the fact that the prior factorizes over the variables x_i. However, this fact also implies that the models cannot encode any prior knowledge of the structure of the sparsity pattern. One approach to modelling a richer sparsity structure is the so-called group sparsity approach, where the set of variables x has been partitioned into groups beforehand. This approach has been extensively developed in the ℓ1-minimization community, e.g. group LASSO, sparse group LASSO [6] and graph LASSO [7]. Let G be a partition of the set of variables into G groups. A Bayesian equivalent of group sparsity is the group spike and slab model [8], which takes the form

p(x | z) = ∏_{g=1}^{G} [(1 − z_g) δ(x_g) + z_g N(x_g | 0, τ I_g)],  p(z | λ) = ∏_{g=1}^{G} Bernoulli(z_g | λ_g),  (3)

where z ∈ {0, 1}^G are binary support variables indicating whether the variables in the different groups are active or not. Other relevant work includes [9] and [10]. Another, more flexible approach is to use a Markov random field (MRF) as a prior for the binary variables [2]. Related to the MRF formulation, we propose a novel model called the structured spike and slab model. This model allows us to encode a priori information about the sparsity pattern into the model using generic covariance functions rather than through clique potentials as in the MRF formulation [2]. Furthermore, we provide a Bayesian inference scheme based on expectation propagation for the proposed model.

2 The structured spike and slab prior

We propose a hierarchical prior of the following form:

p(x | γ) = ∏_{i=1}^{D} p(x_i | g(γ_i)),  p(γ) = N(γ | μ_0, Σ_0),  (4)

where g : R → R is a suitable injective transformation. That is, we impose a Gaussian process [11] as a prior on the parameters γ_i.
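The hierarchical prior of eq. (4) is straightforward to sample. As assumptions for this sketch, we take g to be the standard normal CDF and Σ_0 a squared-exponential kernel (concrete choices, not forced by eq. (4)); the slab variance tau0 = 1 and the constant mean −1.5 are likewise arbitrary values chosen to give a sparse support:

```python
import math
import numpy as np

def sample_structured_prior(D=100, scale=5.0, kappa=50.0, tau0=1.0, mu=-1.5, seed=0):
    """Draw (x, z, gamma) from a structured spike and slab prior (eq. (4))."""
    rng = np.random.default_rng(seed)
    idx = np.arange(D)
    # squared-exponential covariance; a small jitter keeps it numerically PD
    Sigma0 = kappa * np.exp(-(idx[:, None] - idx[None, :]) ** 2 / (2.0 * scale**2))
    gamma = rng.multivariate_normal(mu * np.ones(D), Sigma0 + 1e-8 * np.eye(D))
    g = 0.5 * (1.0 + np.vectorize(math.erf)(gamma / math.sqrt(2.0)))  # standard normal CDF
    z = rng.random(D) < g                                  # z_i ~ Ber(g(gamma_i))
    x = np.where(z, rng.normal(0.0, math.sqrt(tau0), D), 0.0)  # spike at 0, slab N(0, tau0)
    return x, z, gamma

x, z, gamma = sample_structured_prior()
assert x.shape == (100,) and not np.any(x[~z])  # spikes are exactly zero
```

A larger `scale` produces contiguous, clustered supports; a small `scale` gives scattered spikes.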
Using this parametrization, prior knowledge of the structure of the sparsity pattern can be encoded using μ_0 and Σ_0. The mean value μ_0 controls the prior belief in the support, and the covariance matrix determines the prior correlation of the support. In the remainder of this paper we restrict p(x_i | g(γ_i)) to be a spike and slab model, i.e.

p(x_i | z_i) = (1 − z_i)δ(x_i) + z_i N(x_i | 0, τ_0),  z_i ∼ Ber(g(γ_i)).  (5)

This formulation clearly fits into eq. (4) when z_i is marginalized out. Furthermore, we will assume that g is the standard normal CDF, i.e. g(x) = φ(x). Using this formulation, the marginal prior probability of the i'th weight being active is given by:

p(z_i = 1) = ∫ p(z_i = 1 | γ_i) p(γ_i) dγ_i = ∫ φ(γ_i) N(γ_i | μ_i, Σ_ii) dγ_i = φ( μ_i / √(1 + Σ_ii) ).  (6)

This implies that the probability of z_i = 1 is 0.5 when μ_i = 0, as expected. In contrast to the ℓ1-based methods and the MRF priors, the Gaussian process formulation makes it easy to generate samples from the model. Figures 1(a) and 1(b) each show three realizations of the support from the prior using a squared exponential kernel of the form Σ_ij = 50 exp(−(i − j)^2 / (2s^2)), with μ_i fixed such that the expected level of sparsity is 10%. It is seen that when the scale s is small, the support consists of scattered spikes. As the scale increases, the support of the signals becomes more contiguous and clustered, and the sizes of the clusters increase with the scale. To gain insight into the relationship between γ and z, we consider the two-dimensional system with μ_i = 0 and the following covariance structure:

Σ_0 = κ [ 1  ρ ; ρ  1 ],  κ > 0.  (7)

The correlation between z_1 and z_2 is then computed as a function of ρ and κ by sampling. The resulting curves in Figure 1(c) show that the desired correlation is an increasing function of ρ, as expected. However, the figure also reveals that ρ = 1, i.e. 100% correlation between the γ parameters, does not imply 100% correlation of the support variables z. This
This is due to the fact that there are two levels of uncertainty in the prior distribution of the support: first we sample $\boldsymbol{\gamma}$, and then we sample the support $\mathbf{z}$ conditioned on $\boldsymbol{\gamma}$.

[Figure 1: (a, b) Realizations of the support $\mathbf{z}$ from the prior distribution using a squared exponential covariance function for $\boldsymbol{\gamma}$, i.e. $\Sigma_{ij} = 50\exp(-(i-j)^2/2s^2)$, for scales $s = 0.1$ and $s = 5$; $\boldsymbol{\mu}$ is fixed to match an expected sparsity rate $K/D$ of 10%. (c) Correlation of $z_1$ and $z_2$ as a function of $\rho$ for different values of $\kappa$ ($\kappa = 1, 10, 10000$), obtained by sampling; the prior mean is fixed at $\mu_i = 0$ for all $i$.]

The proposed prior formulation extends easily to the multiple measurement vector (MMV) formulation [12, 13, 14], in which multiple linear inverse problems are solved simultaneously. The most straightforward way is to assume that all problem instances share the same support variable, commonly known as joint sparsity [14]:

$$p(\mathbf{X}|\mathbf{z}) = \prod_{t=1}^{T}\prod_{i=1}^{D}\left[(1-z_i)\,\delta(x_i^t) + z_i\,\mathcal{N}(x_i^t|0,\tau)\right], \quad (8)$$
$$p(z_i|\gamma_i) = \mathrm{Ber}(z_i|\phi(\gamma_i)), \quad (9)$$
$$p(\boldsymbol{\gamma}) = \mathcal{N}(\boldsymbol{\gamma}|\boldsymbol{\mu}_0,\boldsymbol{\Sigma}_0), \quad (10)$$

where $\mathbf{X} = [\mathbf{x}^1 \cdots \mathbf{x}^T] \in \mathbb{R}^{D\times T}$. The model can also be extended to problems where the sparsity pattern changes over time:

$$p(\mathbf{X}|\mathbf{Z}) = \prod_{t=1}^{T}\prod_{i=1}^{D}\left[(1-z_i^t)\,\delta(x_i^t) + z_i^t\,\mathcal{N}(x_i^t|0,\tau)\right], \quad (11)$$
$$p(z_i^t|\gamma_i^t) = \mathrm{Ber}(z_i^t|\phi(\gamma_i^t)), \quad (12)$$
$$p(\boldsymbol{\gamma}^1,\ldots,\boldsymbol{\gamma}^T) = \mathcal{N}(\boldsymbol{\gamma}^1|\boldsymbol{\mu}_0,\boldsymbol{\Sigma}_0)\prod_{t=2}^{T}\mathcal{N}\!\left(\boldsymbol{\gamma}^t\,\middle|\,(1-\alpha)\boldsymbol{\mu}_0 + \alpha\boldsymbol{\gamma}^{t-1},\,\beta\boldsymbol{\Sigma}_0\right), \quad (13)$$

where the parameters $0 \le \alpha \le 1$ and $\beta \ge 0$ control the temporal dynamics of the support.

3 Bayesian inference using expectation propagation

In this section we combine the structured spike and slab prior as given in eq. (5) with an isotropic Gaussian noise model and derive an inference algorithm based on expectation propagation.
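The time-varying generative process of eqs. (11)–(13) can be sketched as a first-order chain over $\boldsymbol{\gamma}^t$. The specific values of $\alpha$ and $\beta$ below are illustrative; choosing $\beta = 1 - \alpha^2$ keeps the marginal of each $\boldsymbol{\gamma}^t$ at $\mathcal{N}(\boldsymbol{\mu}_0, \boldsymbol{\Sigma}_0)$, which is a convenient but optional choice of ours.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

D, T = 50, 10
alpha = 0.9                 # temporal smoothing of the support
beta = 1.0 - alpha ** 2     # keeps the marginal of gamma^t stationary (our choice)

idx = np.arange(D)
Sigma0 = 10.0 * np.exp(-(idx[:, None] - idx[None, :]) ** 2 / (2 * 3.0 ** 2))
mu0 = np.full(D, -1.0)
L = np.linalg.cholesky(Sigma0 + 1e-6 * np.eye(D))

# gamma^1 ~ N(mu0, Sigma0); gamma^t ~ N((1-alpha) mu0 + alpha gamma^{t-1}, beta Sigma0)
gammas = np.empty((T, D))
gammas[0] = mu0 + L @ rng.standard_normal(D)
for t in range(1, T):
    mean_t = (1 - alpha) * mu0 + alpha * gammas[t - 1]
    gammas[t] = mean_t + np.sqrt(beta) * (L @ rng.standard_normal(D))

# Slowly changing support: z_i^t ~ Ber(phi(gamma_i^t))  (eq. (12))
Z = (rng.random((T, D)) < norm.cdf(gammas)).astype(int)
```

Rows of `Z` are the per-time-step supports; with $\alpha$ close to 1, consecutive rows change only gradually.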
The likelihood function is $p(\mathbf{y}|\mathbf{x}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\mathbf{x},\sigma_0^2 I)$, and the joint posterior distribution of interest thus becomes

$$p(\mathbf{x},\mathbf{z},\boldsymbol{\gamma}|\mathbf{y}) = \frac{1}{Z}\,p(\mathbf{y}|\mathbf{x})\,p(\mathbf{x}|\mathbf{z})\,p(\mathbf{z}|\boldsymbol{\gamma})\,p(\boldsymbol{\gamma}) \quad (14)$$
$$= \frac{1}{Z}\,\underbrace{\mathcal{N}(\mathbf{y}|\mathbf{A}\mathbf{x},\sigma_0^2 I)}_{f_1}\,\underbrace{\prod_{i=1}^{D}\left[(1-z_i)\,\delta(x_i) + z_i\,\mathcal{N}(x_i|0,\tau_0)\right]}_{f_2}\,\underbrace{\prod_{i=1}^{D}\mathrm{Ber}(z_i|\phi(\gamma_i))}_{f_3}\,\underbrace{\mathcal{N}(\boldsymbol{\gamma}|\boldsymbol{\mu}_0,\boldsymbol{\Sigma}_0)}_{f_4},$$

where $Z$ is the normalization constant, independent of $\mathbf{x}$, $\mathbf{z}$ and $\boldsymbol{\gamma}$. Unfortunately, the true posterior is intractable and therefore we have to settle for an approximation. In particular, we apply the framework of expectation propagation (EP) [15, 16], which is an iterative deterministic framework for approximating probability distributions using distributions from the exponential family. The algorithm proposed here can be seen as an extension of the work in [8]. As shown in eq. (14), the true posterior is a composition of four factors $f_a$ for $a = 1,\ldots,4$. The terms $f_2$ and $f_3$ are further decomposed into $D$ conditionally independent factors:

$$f_2(\mathbf{x},\mathbf{z}) = \prod_{i=1}^{D}f_{2,i}(x_i,z_i) = \prod_{i=1}^{D}\left[(1-z_i)\,\delta(x_i) + z_i\,\mathcal{N}(x_i|0,\tau_0)\right], \quad (15)$$
$$f_3(\mathbf{z},\boldsymbol{\gamma}) = \prod_{i=1}^{D}f_{3,i}(z_i,\gamma_i) = \prod_{i=1}^{D}\mathrm{Ber}(z_i|\phi(\gamma_i)). \quad (16)$$

The idea is then to approximate each term $f_a$ in the true posterior density by a simpler term $\tilde f_a$ for $a = 1,\ldots,4$. The resulting approximation $Q(\mathbf{x},\mathbf{z},\boldsymbol{\gamma})$ then becomes

$$Q(\mathbf{x},\mathbf{z},\boldsymbol{\gamma}) = \frac{1}{Z_{EP}}\prod_{a=1}^{4}\tilde f_a(\mathbf{x},\mathbf{z},\boldsymbol{\gamma}). \quad (17)$$

The terms $\tilde f_1$ and $\tilde f_4$ can be computed exactly. In fact, $\tilde f_4$ is simply equal to the prior over $\boldsymbol{\gamma}$, and $\tilde f_1$ is a multivariate Gaussian distribution with mean $\tilde{\mathbf{m}}_1$ and covariance matrix $\tilde{\mathbf{V}}_1$ determined by $\tilde{\mathbf{V}}_1^{-1}\tilde{\mathbf{m}}_1 = \frac{1}{\sigma_0^2}\mathbf{A}^T\mathbf{y}$ and $\tilde{\mathbf{V}}_1^{-1} = \frac{1}{\sigma_0^2}\mathbf{A}^T\mathbf{A}$. Therefore, we only have to approximate the factors $\tilde f_2$ and $\tilde f_3$ using EP. Note that the exact term $f_1$ is a distribution of $\mathbf{y}$ conditioned on $\mathbf{x}$, whereas the approximate term $\tilde f_1$ is a function of $\mathbf{x}$ that depends on $\mathbf{y}$ through $\tilde{\mathbf{m}}_1$ and $\tilde{\mathbf{V}}_1$. In order to take full advantage of the structure of the true posterior distribution, we will further assume that the terms $\tilde f_2$ and $\tilde f_3$ are also decomposed into $D$ independent factors.
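Since $\tilde f_1$ is a rank-deficient Gaussian when $N < D$, it is most conveniently stored in natural (precision) form, as the text's expressions for $\tilde{\mathbf{V}}_1^{-1}$ and $\tilde{\mathbf{V}}_1^{-1}\tilde{\mathbf{m}}_1$ suggest. A minimal sketch, with illustrative sizes and a made-up diagonal term standing in for $\tilde f_2$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, sigma2 = 20, 40, 0.1
A = rng.standard_normal((N, D))
y = rng.standard_normal(N)

# Natural parameters of the exact Gaussian term f1:
prec1 = A.T @ A / sigma2   # V1^{-1}; has rank N <= D, so V1 itself need not exist
h1 = A.T @ y / sigma2      # V1^{-1} m1

# Combining with a diagonal approximate term f2, in the spirit of eq. (19):
v2 = np.full(D, 2.0)       # diagonal of V2 (illustrative values)
m2 = np.full(D, 0.5)       # mean of the f2 term (illustrative values)
V = np.linalg.inv(prec1 + np.diag(1.0 / v2))
m = V @ (h1 + m2 / v2)
```

The sum of the two precisions is full rank even though `prec1` alone is not, which is exactly why the combination in eq. (19) is well defined.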
The EP scheme provides great flexibility in the choice of the approximating factors. This choice is a trade-off between analytical tractability and sufficient flexibility for capturing the important characteristics of the true density. Due to the product over the binary support variables $z_i$ for $i = 1,\ldots,D$, the true density is highly multimodal. Finally, $f_2$ couples the variables $\mathbf{x}$ and $\mathbf{z}$, while $f_3$ couples the variables $\mathbf{z}$ and $\boldsymbol{\gamma}$. Based on these observations, we choose $\tilde f_2$ and $\tilde f_3$ to have the following forms:

$$\tilde f_2(\mathbf{x},\mathbf{z}) \propto \prod_{i=1}^{D}\mathcal{N}(x_i|\tilde m_{2,i},\tilde v_{2,i})\prod_{i=1}^{D}\mathrm{Ber}(z_i|\phi(\tilde\gamma_{2,i})) = \mathcal{N}(\mathbf{x}|\tilde{\mathbf{m}}_2,\tilde{\mathbf{V}}_2)\prod_{i=1}^{D}\mathrm{Ber}(z_i|\phi(\tilde\gamma_{2,i})),$$
$$\tilde f_3(\mathbf{z},\boldsymbol{\gamma}) \propto \prod_{i=1}^{D}\mathrm{Ber}(z_i|\phi(\tilde\gamma_{3,i}))\prod_{i=1}^{D}\mathcal{N}(\gamma_i|\tilde\mu_{3,i},\tilde\sigma_{3,i}) = \mathcal{N}(\boldsymbol{\gamma}|\tilde{\boldsymbol{\mu}}_3,\tilde{\boldsymbol{\Sigma}}_3)\prod_{i=1}^{D}\mathrm{Ber}(z_i|\phi(\tilde\gamma_{3,i})),$$

where $\tilde{\mathbf{m}}_2 = [\tilde m_{2,1},\ldots,\tilde m_{2,D}]^T$, $\tilde{\mathbf{V}}_2 = \mathrm{diag}(\tilde v_{2,1},\ldots,\tilde v_{2,D})$, and analogously for $\tilde{\boldsymbol{\mu}}_3$ and $\tilde{\boldsymbol{\Sigma}}_3$. These choices lead to a joint variational approximation $Q(\mathbf{x},\mathbf{z},\boldsymbol{\gamma})$ of the form

$$Q(\mathbf{x},\mathbf{z},\boldsymbol{\gamma}) = \mathcal{N}(\mathbf{x}|\tilde{\mathbf{m}},\tilde{\mathbf{V}})\prod_{i=1}^{D}\mathrm{Ber}(z_i|\phi(\tilde\gamma_i))\,\mathcal{N}(\boldsymbol{\gamma}|\tilde{\boldsymbol{\mu}},\tilde{\boldsymbol{\Sigma}}), \quad (18)$$

where the joint parameters are given by

$$\tilde{\mathbf{V}} = \left(\tilde{\mathbf{V}}_1^{-1} + \tilde{\mathbf{V}}_2^{-1}\right)^{-1}, \qquad \tilde{\mathbf{m}} = \tilde{\mathbf{V}}\left(\tilde{\mathbf{V}}_1^{-1}\tilde{\mathbf{m}}_1 + \tilde{\mathbf{V}}_2^{-1}\tilde{\mathbf{m}}_2\right), \quad (19)$$
$$\tilde{\boldsymbol{\Sigma}} = \left(\tilde{\boldsymbol{\Sigma}}_3^{-1} + \tilde{\boldsymbol{\Sigma}}_4^{-1}\right)^{-1}, \qquad \tilde{\boldsymbol{\mu}} = \tilde{\boldsymbol{\Sigma}}\left(\tilde{\boldsymbol{\Sigma}}_3^{-1}\tilde{\boldsymbol{\mu}}_3 + \tilde{\boldsymbol{\Sigma}}_4^{-1}\tilde{\boldsymbol{\mu}}_4\right), \quad (20)$$
$$\tilde\gamma_j = \phi^{-1}\!\left(\left[\frac{\left(1-\phi(\tilde\gamma_{2,j})\right)\left(1-\phi(\tilde\gamma_{3,j})\right)}{\phi(\tilde\gamma_{2,j})\,\phi(\tilde\gamma_{3,j})} + 1\right]^{-1}\right), \qquad \forall j \in \{1,\ldots,D\}, \quad (21)$$

where $\phi^{-1}(x)$ is the probit function. The function in eq. (21) amounts to computing the product of two Bernoulli densities parametrized using $\phi(\cdot)$.

The proposed algorithm (Figure 2) is as follows:

• Initialize approximation terms $\tilde f_a$ for $a = 1, 2, 3, 4$ and $Q$
• Repeat until stopping criterion:
  – For each $\tilde f_{2,i}$:
    ∗ Compute the cavity distribution: $Q^{\backslash 2,i} \propto Q/\tilde f_{2,i}$
    ∗ Minimize $\mathrm{KL}\!\left(f_{2,i}Q^{\backslash 2,i}\,\|\,Q^{2,\mathrm{new}}\right)$ w.r.t. $Q^{2,\mathrm{new}}$
    ∗ Compute $\tilde f_{2,i} \propto Q^{2,\mathrm{new}}/Q^{\backslash 2,i}$ to update the parameters $\tilde m_{2,i}$, $\tilde v_{2,i}$ and $\tilde\gamma_{2,i}$
  – Update the joint approximation parameters $\tilde{\mathbf{m}}$, $\tilde{\mathbf{V}}$ and $\tilde{\boldsymbol{\gamma}}$
  – For each $\tilde f_{3,i}$:
    ∗ Compute the cavity distribution: $Q^{\backslash 3,i} \propto Q/\tilde f_{3,i}$
    ∗ Minimize $\mathrm{KL}\!\left(f_{3,i}Q^{\backslash 3,i}\,\|\,Q^{3,\mathrm{new}}\right)$ w.r.t. $Q^{3,\mathrm{new}}$
    ∗ Compute $\tilde f_{3,i} \propto Q^{3,\mathrm{new}}/Q^{\backslash 3,i}$ to update the parameters $\tilde\mu_{3,i}$, $\tilde\sigma_{3,i}$ and $\tilde\gamma_{3,i}$
  – Update the joint approximation parameters $\tilde{\boldsymbol{\mu}}$, $\tilde{\boldsymbol{\Sigma}}$ and $\tilde{\boldsymbol{\gamma}}$

Figure 2: Proposed algorithm for approximating the joint posterior distribution over $\mathbf{x}$, $\mathbf{z}$ and $\boldsymbol{\gamma}$.

3.1 The EP algorithm

Consider the update of the term $\tilde f_{a,i}$ for a given $a$ and a given $i$, where $\tilde f_a = \prod_i \tilde f_{a,i}$. This update is performed by first removing the contribution of $\tilde f_{a,i}$ from the joint approximation by forming the so-called cavity distribution

$$Q^{\backslash a,i} \propto Q/\tilde f_{a,i}, \quad (22)$$

followed by the minimization of the Kullback–Leibler [17] divergence between $f_{a,i}Q^{\backslash a,i}$ and $Q^{a,\mathrm{new}}$ w.r.t. $Q^{a,\mathrm{new}}$. For distributions within the exponential family, minimizing this form of KL divergence amounts to matching moments between $f_{a,i}Q^{\backslash a,i}$ and $Q^{a,\mathrm{new}}$ [15]. Finally, the new update of $\tilde f_{a,i}$ is given by

$$\tilde f_{a,i} \propto Q^{a,\mathrm{new}}/Q^{\backslash a,i}. \quad (23)$$

After all the individual approximation terms $\tilde f_{a,i}$ for $a = 2, 3$ and $i = 1,\ldots,D$ have been updated, the joint approximation is updated using eqs. (19)–(21). To minimize the computational load, we use parallel updates of $\tilde f_{2,i}$ [8] followed by parallel updates of $\tilde f_{3,i}$, rather than the conventional sequential update scheme. Furthermore, due to the fact that $\tilde f_2$ and $\tilde f_3$ factorize, we only need the marginals of the cavity distributions $Q^{\backslash a,i}$ and the marginals of the updated joint distributions $Q^{a,\mathrm{new}}$ for $a = 2, 3$. Computing the cavity distributions and matching the moments is tedious but straightforward. The moments of $f_{a,i}Q^{\backslash a,i}$ require evaluation of the zeroth, first and second order moments of distributions of the form $\phi(\gamma_i)\mathcal{N}(\gamma_i|\mu_i,\Sigma_{ii})$. Derivation of analytical expressions for these moments can be found in [11]. See the supplementary material for more details. The proposed algorithm is summarized in Figure 2. Note that the EP framework also provides an approximation of the marginal likelihood [11], which can be useful for learning the hyperparameters of the model.
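The Bernoulli product in eq. (21) reduces to a one-line probability update when carried out explicitly; a small helper makes this concrete (the function name is ours):

```python
from scipy.stats import norm

def bernoulli_product(gamma_a, gamma_b):
    """Combine two Bernoulli messages Ber(z | phi(gamma)) into one, as in eq. (21).

    Working in probability space: p = pa*pb / (pa*pb + (1-pa)*(1-pb)),
    then map back to the probit parametrization with phi^{-1}."""
    pa, pb = norm.cdf(gamma_a), norm.cdf(gamma_b)
    p = pa * pb / (pa * pb + (1 - pa) * (1 - pb))
    return norm.ppf(p)
```

Note that a message with $\tilde\gamma = 0$ (i.e. probability 0.5) acts as the identity element of this product, which is a useful sanity check.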
Furthermore, the proposed inference scheme can easily be extended to the MMV formulation of eqs. (8)–(10) by introducing a term $\tilde f^t_{2,i}$ for each time step $t = 1,\ldots,T$.

3.2 Computational details

Most linear inverse problems of practical interest are high dimensional, i.e. $D$ is large. It is therefore of interest to reduce the computational complexity of the algorithm as much as possible. The dominating operations in this algorithm are the inversions of the two $D\times D$ covariance matrices in eqs. (19) and (20), and therefore the algorithm scales as $O(D^3)$. But $\tilde{\mathbf{V}}_1$ has low rank and $\tilde{\mathbf{V}}_2$ is diagonal, and therefore we can apply the Woodbury matrix identity [18] to eq. (19) to get

$$\tilde{\mathbf{V}} = \tilde{\mathbf{V}}_2 - \tilde{\mathbf{V}}_2\mathbf{A}^T\!\left(\sigma_0^2 I + \mathbf{A}\tilde{\mathbf{V}}_2\mathbf{A}^T\right)^{-1}\!\mathbf{A}\tilde{\mathbf{V}}_2. \quad (24)$$

For $N < D$, this scales as $O(ND^2)$, where $N$ is the number of observations. Unfortunately, we cannot apply the same identity to the inversion in eq. (20), since $\tilde{\boldsymbol{\Sigma}}_4$ has full rank and is non-diagonal in general. However, the eigenvalue spectra of many prior covariance structures of interest, e.g. simple neighbourhoods, decay relatively fast. Therefore, we can approximate $\boldsymbol{\Sigma}_0$ with a low-rank approximation $\boldsymbol{\Sigma}_0 \approx \mathbf{P}\boldsymbol{\Lambda}\mathbf{P}^T$, where $\boldsymbol{\Lambda}\in\mathbb{R}^{R\times R}$ is a diagonal matrix of the $R$ largest eigenvalues and $\mathbf{P}\in\mathbb{R}^{D\times R}$ holds the corresponding eigenvectors. Using the rank-$R$ approximation, we can now invoke the Woodbury matrix identity again to get

$$\tilde{\boldsymbol{\Sigma}} = \tilde{\boldsymbol{\Sigma}}_3 - \tilde{\boldsymbol{\Sigma}}_3\mathbf{P}\left(\boldsymbol{\Lambda} + \mathbf{P}^T\tilde{\boldsymbol{\Sigma}}_3\mathbf{P}\right)^{-1}\mathbf{P}^T\tilde{\boldsymbol{\Sigma}}_3. \quad (25)$$

Similarly, for $R < D$, this scales as $O(RD^2)$. Another, better approach that preserves the total variance would be to use probabilistic PCA [19] to approximate $\boldsymbol{\Sigma}_0$. A third alternative is to consider other structures for $\boldsymbol{\Sigma}_0$ that facilitate fast matrix inversion, such as block structures and Toeplitz structures. Numerical issues can arise in EP implementations, and in order to avoid these, we use the same precautions as described in [8].
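The saving in eq. (24) is easy to check numerically: both expressions below compute the same matrix, but the Woodbury form only inverts an $N\times N$ system. The sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, D = 15, 60
sigma2 = 0.5
A = rng.standard_normal((N, D))
v2 = rng.uniform(0.5, 2.0, size=D)   # diagonal of V2 (illustrative values)
V2 = np.diag(v2)

# Direct inversion, O(D^3): V = (V1^{-1} + V2^{-1})^{-1} with V1^{-1} = A^T A / sigma^2
V_direct = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / v2))

# Woodbury form of eq. (24), O(N D^2): only an N x N system is solved
S = sigma2 * np.eye(N) + A @ V2 @ A.T
V_woodbury = V2 - V2 @ A.T @ np.linalg.solve(S, A @ V2)
```

For the typical compressed sensing regime $N \ll D$, the second form is the one worth implementing.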
4 Numerical experiments

This section describes a series of numerical experiments designed and conducted in order to investigate the properties of the proposed algorithm.

4.1 Experiment 1

The first experiment compares the proposed method to the LARS algorithm [20] and to BG-AMP [21], an approximate message passing-based method for the spike and slab model. We also compare the method to an "oracle" least squares estimator that knows the true support of the solutions. We generate 100 problem instances from $\mathbf{y} = \mathbf{A}\mathbf{x}_0 + \mathbf{e}$, where the solution vectors are sampled from the proposed prior using the kernel $\Sigma_{ij} = 50\exp\!\left(-\|i-j\|_2^2/(2\cdot 10^2)\right)$, but constrained to a fixed sparsity level of $K/D = 0.25$. That is, each solution $\mathbf{x}_0$ has the same number of non-zero entries but a different sparsity pattern. We vary the degree of undersampling from $N/D = 0.05$ to $N/D = 0.95$. The elements of $\mathbf{A}\in\mathbb{R}^{N\times 250}$ are i.i.d. Gaussian, and the columns of $\mathbf{A}$ are scaled to unit $\ell_2$-norm. The SNR is fixed at 20 dB. We apply the four methods to each of the 100 problems, and for each solution we compute the normalized mean square error (NMSE) between the true signal $\mathbf{x}_0$ and the estimate $\hat{\mathbf{x}}$, as well as the F-measure:

$$\mathrm{NMSE} = \frac{\|\mathbf{x}_0 - \hat{\mathbf{x}}\|_2}{\|\mathbf{x}_0\|_2}, \qquad F = 2\,\frac{\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}, \quad (26)$$

where precision and recall are computed using a MAP estimate of the support. For the structured spike and slab method, we consider three different covariance structures: $\Sigma_{ij} = \kappa\,\delta(i-j)$, $\Sigma_{ij} = \kappa\exp(-\|i-j\|_2/s)$ and $\Sigma_{ij} = \kappa\exp(-\|i-j\|_2^2/(2s^2))$, with parameters $\kappa = 50$ and $s = 10$. In each case, we use a rank $R = 50$ approximation of $\boldsymbol{\Sigma}_0$. The average results are shown in Figures 3(a)–(f). Figure 3(a) shows an example of one of the sampled vectors $\mathbf{x}_0$, and Figure 3(b) shows the three covariance functions. From Figures 3(c)–(d), it is seen that the two EP methods with neighbour correlation are able to improve the phase transition point.
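The two evaluation metrics in eq. (26) can be sketched as follows. Here the support of the estimate is simply taken as its non-zero entries, standing in for the MAP support estimate used in the paper.

```python
import numpy as np

def nmse(x_true, x_hat):
    """Normalized mean square error of eq. (26): ||x0 - xhat|| / ||x0||."""
    return np.linalg.norm(x_true - x_hat) / np.linalg.norm(x_true)

def f_measure(x_true, x_hat, tol=1e-12):
    """F-measure of eq. (26), with supports taken as the non-zero entries."""
    support_true = np.abs(x_true) > tol
    support_hat = np.abs(x_hat) > tol
    tp = np.sum(support_true & support_hat)   # true positives
    if tp == 0:
        return 0.0
    precision = tp / np.sum(support_hat)
    recall = tp / np.sum(support_true)
    return 2 * precision * recall / (precision + recall)
```

NMSE measures reconstruction quality of the coefficient values, while the F-measure only scores recovery of the support; the experiments report both because the two can disagree.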
[Figure 3: Illustration of the benefit of modelling the additional structure of the sparsity pattern. Panels: (a) an example signal $\mathbf{x}_0$; (b) the three covariance functions (diagonal, exponential, squared exponential); (c) NMSE, (d) F-measure, (e) run times and (f) iteration counts as functions of the undersampling ratio $N/D$, for Oracle LS, LARS, BG-AMP and the three EP variants. 100 problem instances are generated using the linear measurement model $\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{e}$, where the elements of $\mathbf{A}\in\mathbb{R}^{N\times 250}$ are i.i.d. Gaussian and the columns are scaled to unit $\ell_2$-norm. The solutions $\mathbf{x}_0$ are sampled from the prior in eq. (5) with hyperparameters $\Sigma_{ij} = 50\exp\!\left(-\|i-j\|^2/(2\cdot 10^2)\right)$ and a fixed sparsity level of $K/D = 0.25$. For the EP methods, the $\boldsymbol{\Sigma}_0$ matrix is approximated using a rank-50 matrix. The SNR is fixed at 20 dB.]

That is, in order to obtain a reconstruction of the signal such that $F \approx 0.8$, EP with a diagonal covariance and BG-AMP need an undersampling ratio of $N/D \approx 0.55$, while the EP methods with neighbour correlation only need $N/D \approx 0.35$ to achieve $F \approx 0.8$. For this specific problem, this means that utilizing the neighbourhood structure allows us to reconstruct the signal with 50 fewer observations. Note that the reconstruction using the exponential covariance function also improves the result, even though the true underlying covariance structure corresponds to a squared exponential function.
Furthermore, we see similar performance for BG-AMP and EP with a diagonal covariance matrix. This is expected for problems where $A_{ij}$ is drawn i.i.d., as assumed in BG-AMP. However, the price of the improved phase transition is clear from Figure 3(e): the proposed algorithm has significantly higher computational complexity than BG-AMP and LARS. Figure 4(a) shows the posterior mean of $\mathbf{z}$ for the signal shown in Figure 3(a). Here it is seen that the two models with neighbour correlation provide a better approximation to the posterior activation probabilities. Figure 4(b) shows the posterior mean of $\boldsymbol{\gamma}$ for the model with the squared exponential kernel, along with $\pm$ one standard deviation.

4.2 Experiment 2

In this experiment we consider an application of the MMV formulation as given in eqs. (8)–(10), namely EEG source localization with synthetic sources [22]. Here we are interested in localizing the active sources within a specific region of interest on the cortical surface (the grey area in Figure 5(a)). To do this, we generate a problem instance of $\mathbf{Y} = \mathbf{A}_{EEG}\mathbf{X}_0 + \mathbf{E}$ using the procedure described in Experiment 1, where $\mathbf{A}_{EEG}\in\mathbb{R}^{128\times 800}$ is now a submatrix of a real EEG forward matrix corresponding to the grey area in the figure. The condition number of $\mathbf{A}_{EEG}$ is $\approx 8\cdot 10^{15}$. The true sources $\mathbf{X}_0\in\mathbb{R}^{800\times 20}$ are sampled from the structured spike and slab prior in eq. (8) using a squared exponential kernel with parameters $\kappa = 50$, $s = 10$ and $T = 20$. The number of active sources is 46, i.e. $\mathbf{X}_0$ has 46 non-zero rows. The SNR is fixed at 20 dB. The true sources are shown in Figure 5(a). We now use the EP algorithm to recover the sources using the true prior, i.e. the squared exponential kernel.
[Figure 4: (a) Marginal posterior means of $\mathbf{z}$ obtained using the structured spike and slab model for the signal in Figure 3(a). The experimental set-up is the same as described in Figure 3, except that the undersampling ratio is fixed at $N/D = 0.5$; the green dots indicate the true support. (b) The posterior mean of $\boldsymbol{\gamma}$ for the squared exponential model, superimposed with $\pm$ one standard deviation.]

The results are shown in Figure 5(b). We see that the algorithm detects most of the sources correctly, even the small blob on the right-hand side. However, it also introduces a small number of false positives in the neighbourhood of the true active sources. The resulting F-measure is $F_{sq} = 0.78$. Figure 5(c) shows the result of reconstructing the sources using a diagonal covariance matrix, where $F_{diag} = 0.34$.

[Figure 5: Source localization using synthetic sources. The matrix $\mathbf{A}\in\mathbb{R}^{128\times 800}$ is a submatrix (grey area) of a real EEG forward matrix. (a) True sources. (b) Reconstruction using the true prior, $F_{sq} = 0.78$. (c) Reconstruction using a diagonal covariance matrix, $F_{diag} = 0.34$.]

Here, the BG-AMP algorithm is expected to perform poorly due to the heavy violation of its assumption that $A_{ij}$ is Gaussian i.i.d.

4.3 Experiment 3

We have also recreated the Shepp–Logan phantom experiment from [2], with $D = 10^4$ unknowns, $K = 1723$ non-zero weights, $N = 2K$ observations and SNR = 10 dB (see the supplementary material for more details). The EP method yields $F_{sq} = 0.994$ and $\mathrm{NMSE}_{sq} = 0.336$ for this experiment, whereas BG-AMP yields $F = 0.624$ and $\mathrm{NMSE} = 0.717$. For reference, the oracle estimator yields $\mathrm{NMSE} = 0.326$.

5 Conclusion and outlook

We introduced the structured spike and slab model, which allows incorporation of a priori knowledge of the sparsity pattern. We developed an expectation propagation-based algorithm for Bayesian inference under the proposed model.
Future work includes developing a scheme for learning the structure of the sparsity pattern and extending the algorithm to the multiple measurement vector formulation with slowly changing support.

References

[1] R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis. In AISTATS, pages 366–373, 2010.
[2] V. Cevher, M. F. Duarte, C. Hegde, and R. G. Baraniuk. Sparse signal recovery using Markov random fields. In NIPS, Vancouver, B.C., Canada, 8–11 December 2008.
[3] M. Pontil, L. Baldassarre, and J. Mourão-Miranda. Structured sparsity models for brain decoding from fMRI data. In Proceedings of the 2nd International Workshop on Pattern Recognition in NeuroImaging (PRNI 2012), pages 5–8, 2012.
[4] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267–288, 1996.
[5] T. J. Mitchell and J. Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
[6] N. Simon, J. Friedman, T. Hastie, and R. Tibshirani. A sparse-group lasso. Journal of Computational and Graphical Statistics, 22(2):231–245, 2013.
[7] G. Obozinski, J. P. Vert, and L. Jacob. Group lasso with overlap and graph lasso. ACM International Conference Proceeding Series, vol. 382, 2009.
[8] D. Hernandez-Lobato, J. Hernandez-Lobato, and P. Dupont. Generalized spike-and-slab priors for Bayesian group feature selection using expectation propagation. Journal of Machine Learning Research, 14:1891–1945, 2013.
[9] L. Yu, H. Sun, J. P. Barbot, and G. Zheng. Bayesian compressive sensing for cluster structured sparse signals. Signal Processing, 92(1):259–269, 2012.
[10] M. Van Gerven, B. Cseke, R. Oostenveld, and T. Heskes. Bayesian source localization with the multivariate Laplace prior. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1901–1909. Curran Associates, Inc., 2009.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[12] S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado. Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Transactions on Signal Processing, pages 2477–2488, 2005.
[13] D. P. Wipf and B. D. Rao. An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Transactions on Signal Processing, 55(7):3704–3716, 2007.
[14] J. Ziniel and P. Schniter. Dynamic compressive sensing of time-varying signals via approximate message passing. IEEE Transactions on Signal Processing, 61(21):5270–5284, 2013.
[15] T. Minka. Expectation propagation for approximate Bayesian inference. In Proceedings of the Seventeenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-01), pages 362–369, San Francisco, CA, 2001. Morgan Kaufmann.
[16] M. Opper and O. Winther. Gaussian processes for classification: Mean-field algorithms. Neural Computation, 12(11):2655–2684, 2000.
[17] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[18] K. B. Petersen and M. S. Pedersen. The Matrix Cookbook. 2012.
[19] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61:611–622, 1999.
[20] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32:407–499, 2004.
[21] P. Schniter and J. Vila. Expectation-maximization Gaussian-mixture approximate message passing. In 2012 46th Annual Conference on Information Sciences and Systems (CISS), 2012.
[22] S. Baillet, J. C. Mosher, and R. M. Leahy. Electromagnetic brain mapping. IEEE Signal Processing Magazine, 18(6):14–30, 2001.
Just-In-Time Learning for Fast and Flexible Inference S. M. Ali Eslami, Daniel Tarlow, Pushmeet Kohli and John Winn Microsoft Research {alie,dtarlow,pkohli,jwinn}@microsoft.com Abstract Much of research in machine learning has centered around the search for inference algorithms that are both general-purpose and efficient. The problem is extremely challenging and general inference remains computationally expensive. We seek to address this problem by observing that in most specific applications of a model, we typically only need to perform a small subset of all possible inference computations. Motivated by this, we introduce just-in-time learning, a framework for fast and flexible inference that learns to speed up inference at run-time. Through a series of experiments, we show how this framework can allow us to combine the flexibility of sampling with the efficiency of deterministic message-passing. 1 Introduction We would like to live in a world where we can define a probabilistic model, press a button, and get accurate inference results within a matter of seconds or minutes. Probabilistic programming languages allow for the rapid definition of rich probabilistic models to this end, but they also raise a crucial question: what algorithms can we use to efficiently perform inference for the largest possible set of programs in the language? Much of recent research in machine learning has centered around the search for inference algorithms that are both flexible and efficient. The general inference problem is extremely challenging and remains computationally expensive. Sampling-based approaches (e.g. [5, 19]) can require many evaluations of the probabilistic program to obtain accurate inference results. Message-passing based approaches (e.g. [12]) are typically faster, but require the program to be expressed in terms of functions for which efficient message-passing operators have been implemented.
However, implementing a message-passing operator for a new function either requires technical expertise, or is computationally expensive, or both. In this paper we propose a solution to this problem that is automatic (it doesn’t require the user to build message passing operators) and efficient (it learns from past experience to make future computations faster). The approach is motivated by the observation that general algorithms are solving problems that are harder than they need to be: in most specific inference problems, we only ever need to perform a small subset of all possible message-passing computations. For example, in Expectation Propagation (EP) the range of input messages to a logistic factor, for which it needs to compute output messages, is highly problem specific (see Fig. 1a). This observation raises the central question of our work: can we automatically speed up the computations required for general message-passing, at run-time, by learning about the statistics of the specific problems at hand? Our proposed framework, which we call just-in-time learning (JIT learning), initially uses highly general algorithms for inference. It does so by computing messages in a message-passing algorithm using Monte Carlo sampling, freeing us from having to implement hand-crafted message update operators. However, it also gradually learns to increase the speed of these computations by regressing from input to output messages (in a similar way to [7]) at run-time. JIT learning enables us to combine the flexibility of sampling (by allowing arbitrary factors) and the speed of hand-crafted message-passing operators (by using regressors), without having to do any pre-training. This constitutes our main contribution and we describe the details of our approach in Sec. 3. 
[Figure 1: (a) Problem-specific variation: the parameters (mean and log precision) of Gaussian messages input to a logistic factor in logistic regression vary significantly across four random UCI datasets (banknote_authentication, blood_transfusion, ionosphere, fertility_diagnosis). (b) Figure for Sec. 4: a regression forest performs 1D regression (1,000 trees, 2 feature samples per node, maximum depth 4, regressor polynomial degree 2); the red shaded area indicates one standard deviation of the predictions made by the different trees in the forest, indicating its uncertainty. (c) Figure for Sec. 6: the yield factor relates temperatures and yields recorded at farms to the optimal temperatures of their planted grain.]

JIT learning enables us to incorporate arbitrary factors with ease, whilst maintaining inference speed. Our implementation relies heavily on the use of regressors that are aware of their own uncertainty. Their awareness about the limits of their knowledge allows them to decide when to trust their predictions and when to fall back to computationally intensive Monte Carlo sampling (similar to [8] and [9]). We show that random regression forests [4] form a natural and efficient basis for this class of 'uncertainty aware' regressors, and we describe how they can be modified for this purpose in Sec. 4. To the best of our knowledge, this is the first application of regression forests to the self-aware learning setting, and it constitutes our second contribution. To demonstrate the efficacy of the JIT framework, we employ it for inference in a variety of graphical models. Experimental results in Sec.
6 show that for general graphical models, our approach leads to significant improvements in inference speed (often several orders of magnitude) over importance sampling whilst maintaining overall accuracy, even boosting performance for models where hand-designed EP message-passing operators are available. Although we demonstrate JIT learning in the context of expectation propagation, the underlying ideas are general and the framework can be used for arbitrary inference problems.

2 Background

A wide class of probabilistic models can be represented using the framework of factor graphs. In this context a factor graph represents the factorization of the joint distribution over a set of random variables $\mathbf{x} = \{x_1,\ldots,x_V\}$ via non-negative factors $\psi_1,\ldots,\psi_F$, given by

$$p(\mathbf{x}) = \frac{1}{Z}\prod_f \psi_f(\mathbf{x}_{ne(\psi_f)}),$$

where $\mathbf{x}_{ne(\psi_f)}$ is the set of variables that factor $\psi_f$ is defined over. We will focus on directed factors of the form $\psi(\mathbf{x}_{out}|\mathbf{x}_{in})$, which directly specify the conditional density over the output variables $\mathbf{x}_{out}$ as a function of the inputs $\mathbf{x}_{in}$, although our approach can be extended to factors of arbitrary form. Belief propagation (or sum-product) is a message-passing algorithm for performing inference in factor graphs with discrete and real-valued variables, and it includes sub-routines that compute variable-to-factor and factor-to-variable messages. The bottleneck is mainly in computing the latter kind, as they often involve intractable integrals. The message from factor $\psi$ to variable $i$ is

$$m_{\psi\to i}(x_i) = \int_{\mathbf{x}_{-i}} \psi(\mathbf{x}_{out}|\mathbf{x}_{in})\prod_{k\in ne(\psi)\setminus i} m_{k\to\psi}(x_k), \quad (1)$$

where $\mathbf{x}_{-i}$ denotes all random variables in $\mathbf{x}_{ne(\psi)}$ except $i$. To further complicate matters, the messages are often not even representable in a compact form. Expectation Propagation [11] extends the applicability of message-passing algorithms by projecting messages back to a pre-determined, tractable family of distributions:

$$m_{\psi\to i}(x_i) = \frac{\mathrm{proj}\!\left[\int_{\mathbf{x}_{-i}} \psi(\mathbf{x}_{out}|\mathbf{x}_{in})\prod_{k\in ne(\psi)} m_{k\to\psi}(x_k)\right]}{m_{i\to\psi}(x_i)}. \quad (2)$$
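For discrete variables, the integral in eq. (1) becomes a weighted sum over the factor table. A minimal sketch with a made-up ternary factor over three variables:

```python
import numpy as np

# A factor psi(x1, x2, x3) over three ternary variables, stored as a dense table
rng = np.random.default_rng(4)
psi = rng.random((3, 3, 3))

# Incoming variable-to-factor messages for x2 and x3 (illustrative values)
m2 = np.array([0.2, 0.5, 0.3])
m3 = np.array([0.6, 0.1, 0.3])

# Eq. (1): sum out all variables except x1, weighting by the incoming messages
m_psi_to_1 = np.einsum('abc,b,c->a', psi, m2, m3)
m_psi_to_1 /= m_psi_to_1.sum()   # normalization is optional but convenient
```

The intractability discussed in the text arises precisely when this sum is replaced by a high-dimensional integral with no closed form, which is what motivates the projection in eq. (2) and the sampling approach that follows.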
The proj[·] operator ensures that the message is a distribution of the correct type, and it only has an effect if its argument is outside the approximating family used for the target message. The integral in the numerator of eq. (2) can be computed using Monte Carlo methods [2, 7], e.g. by using the generally applicable technique of importance sampling. After multiplying and dividing by a proposal distribution $q(\mathbf{x}_{in})$ we get

$$m_{\psi\to i}(x_i) \equiv \mathrm{proj}\!\left[\int_{\mathbf{x}_{-i}} v(\mathbf{x}_{in},\mathbf{x}_{out})\cdot w(\mathbf{x}_{in},\mathbf{x}_{out})\right]\Big/\,m_{i\to\psi}(x_i), \quad (3)$$

where $v(\mathbf{x}_{in},\mathbf{x}_{out}) = q(\mathbf{x}_{in})\,\psi(\mathbf{x}_{out}|\mathbf{x}_{in})$ and $w(\mathbf{x}_{in},\mathbf{x}_{out}) = \prod_{k\in ne(\psi)} m_{k\to\psi}(x_k)/q(\mathbf{x}_{in})$. Therefore

$$m_{\psi\to i}(x_i) \simeq \mathrm{proj}\!\left[\frac{\sum_s w(\mathbf{x}^s_{in},\mathbf{x}^s_{out})\,\delta_{x^s_i}(x_i)}{\sum_s w(\mathbf{x}^s_{in},\mathbf{x}^s_{out})}\right]\Big/\,m_{i\to\psi}(x_i), \quad (4)$$

where $\mathbf{x}^s_{in}$ and $\mathbf{x}^s_{out}$ are samples from $v(\mathbf{x}_{in},\mathbf{x}_{out})$. To sample from $v$, we first draw values $\mathbf{x}^s_{in}$ from $q$, then pass them through the forward-sampling procedure defined by $\psi$ to get a value for $\mathbf{x}^s_{out}$. Crucially, note that we require no knowledge of $\psi$ other than the ability to sample from $\psi(\mathbf{x}_{out}|\mathbf{x}_{in})$. This allows the model designer to incorporate arbitrary factors simply by providing an implementation of this forward sampler, which could be anything from a single line of deterministic code to a large stochastic image renderer. However, drawing a single sample from $\psi$ can itself be a time-consuming operation, and the complexity of $\psi$ and the arity of $\mathbf{x}_{in}$ can both have a dramatic effect on the number of samples required to compute messages accurately.

3 Just-in-time learning of message mappings

Monte Carlo methods (as defined above) are computationally expensive and can lead to slow inference. In this paper, we adopt an approach in which we learn a direct mapping, parameterized by $\theta$, from variable-to-factor messages $\{m_{k\to\psi}\}_{k\in ne(\psi)}$ to a factor-to-variable message $m_{\psi\to i}$:

$$m_{\psi\to i}(x_i) \equiv f\!\left(\{m_{k\to\psi}\}_{k\in ne(\psi)}\,\middle|\,\theta\right). \quad (5)$$

Using this direct mapping function $f$, factor-to-variable messages can be computed in a fraction of the time required to perform full Monte Carlo estimation. Heess et al.
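The forward-sampling view of eqs. (3)–(4) can be sketched on a toy nonlinear factor of our own making. Here the proposal $q$ is chosen equal to the incoming message, so the importance weights are all one and the Gaussian projection reduces to moment matching on the raw samples:

```python
import numpy as np

rng = np.random.default_rng(5)
S = 20000   # number of forward samples

# Toy nonlinear factor psi(x_out | x_in) = N(x_out ; x_in^2, 0.1),
# known to us only through this forward sampler (as in the text)
def sample_factor(x_in):
    return x_in ** 2 + rng.normal(0.0, np.sqrt(0.1), size=x_in.shape)

# Proposal q = incoming message m_in = N(0, 1), so the weights w = m_in / q are all 1
x_in = rng.standard_normal(S)
x_out = sample_factor(x_in)

# proj[.]: moment-match the (equally weighted) samples to a Gaussian, as in eq. (4)
mean_out = x_out.mean()
var_out = x_out.var()
```

For this factor the exact projected moments are mean 1 and variance 2.1, so the estimates should land close to those values; with a more complex factor or a poorer proposal, many more samples would be needed, which is the cost the JIT framework aims to amortize.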
[7] recently used neural networks to learn this mapping offline for a broad range of input message combinations. Motivated by the observation that the distribution of input messages that a factor sees is often problem specific (Fig. 1a), we consider learning the direct mapping just-in-time, in the context of a specific model. For this we employ 'uncertainty aware' regressors. Along with each prediction $m$, the regressor produces a scalar measure $u$ of its uncertainty about that prediction:

$$u_{\psi\to i} \equiv u\!\left(\{m_{k\to\psi}\}_{k\in ne(\psi)}\,\middle|\,\theta\right). \quad (6)$$

We adopt a framework similar to that of uncertainty sampling [8] (see also [9]) and use these uncertainties at run-time to choose between the regressor's estimate and slower 'oracle' computations:

$$m_{\psi\to i}(x_i) = \begin{cases} m_{\psi\to i}(x_i) & u_{\psi\to i} < u_{max} \\ m^{oracle}_{\psi\to i}(x_i) & \text{otherwise} \end{cases} \quad (7)$$

where $u_{max}$ is the maximum tolerated uncertainty for a prediction. In this paper we consider importance sampling or hand-implemented Infer.NET operators as oracles; however, other methods such as MCMC-based samplers could also be used. The regressor is updated after every oracle consultation in order to incorporate the newly acquired information. An appropriate value for $u_{max}$ can be found by collecting a small number of Monte Carlo messages for the target model offline: the uncertainty aware regressor is trained on some portion of the collected messages and evaluated on the held-out portion, producing predictions $m_{\psi\to i}$ and confidences $u_{\psi\to i}$ for every held-out message. We then set $u_{max}$ such that no held-out prediction has an error above a user-specified, problem-specific maximum tolerated value $D_{max}$. A natural choice for this error measure is the mean squared error of the parameters of the messages (e.g. the natural parameters for the exponential family); however, this is sensitive to the particular parameterization chosen for the target distribution type. Instead, for each pair of predicted and oracle messages
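Eq. (7) amounts to a simple run-time gate. The sketch below is ours: the regressor and oracle are stand-in callables, and `on_oracle` is a hypothetical hook standing in for the incremental regressor update described in the text.

```python
def jit_message(inputs, regressor, oracle, u_max, on_oracle=None):
    """Return a message via the fast regressor when its uncertainty is below
    u_max, otherwise consult the slow oracle (eq. (7)) and record the example."""
    m_pred, u = regressor(inputs)        # regressor returns (prediction, uncertainty)
    if u < u_max:
        return m_pred, 'regressor'       # fast path: trusted regression
    m_oracle = oracle(inputs)            # slow path: sampling / hand-crafted operator
    if on_oracle is not None:
        on_oracle(inputs, m_oracle)      # e.g. add (inputs, m_oracle) to training data
    return m_oracle, 'oracle'
```

Early in inference nearly every call falls through to the oracle; as the regressor accumulates training pairs its uncertainties shrink and the fast path dominates, which is the source of the run-time speed-up.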
Instead, for each pair of predicted and oracle messages from factor ψ to variable i, we calculate the marginals bi and bi^oracle that they each induce on the target random variable, and compute the Kullback-Leibler (KL) divergence between the two:

DKL^mar(mψ→i ∥ mψ→i^oracle) ≡ DKL(bi ∥ bi^oracle),   (8)

where bi = mψ→i · mi→ψ and bi^oracle = mψ→i^oracle · mi→ψ, using the fact that beliefs can be computed as the product of the incoming and outgoing messages on any edge. We refer to this error measure DKL^mar as the marginal KL and use it throughout the JIT framework, as it encourages the system to focus effort on the quantity that is ultimately of interest: the accuracy of the posterior marginals.

4 Random decision forests for JIT learning

We wish to learn a mapping from a set of incoming messages {mk→ψ}k∈ne(ψ) to the outgoing message mψ→i. Note that a separate regressor is trained for each outgoing message. We require that the regressor: 1) trains and predicts efficiently; 2) can model arbitrarily complex mappings; 3) can adapt dynamically; and 4) produces uncertainty estimates. Here we describe how decision forests can be modified to satisfy these requirements. For a review of decision forests see [4]. In EP, each incoming and outgoing message can be represented using only a few numbers; e.g. a Gaussian message can be represented by its natural parameters. We refer to the outgoing message as mout and to the set of incoming messages as min. Each set of incoming messages min is represented in two ways: the first, a concatenation of the parameters of its constituent messages, which we call the 'regression parameterization' and denote by rin; and the second, a vector of features computed on the set, which we call the 'tree parameterization' and denote by tin. The tree parameterization typically contains values for a larger number of properties of each constituent message (e.g. parameters and moments), and also properties of the set as a whole (e.g. ψ evaluated at the mode of min).
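For univariate Gaussian messages, the marginal KL of Eq. 8 can be computed in closed form: natural parameters add under the message product, and the KL divergence between two Gaussians has a standard expression. A minimal sketch, with messages represented as (mean, variance) pairs (an illustrative parameterization):

```python
import math

def gaussian_product(m1, v1, m2, v2):
    """Renormalized product of two Gaussian densities: precisions and
    precision-adjusted means add."""
    prec = 1.0 / v1 + 1.0 / v2
    mean = (m1 / v1 + m2 / v2) / prec
    return mean, 1.0 / prec

def gaussian_kl(m1, v1, m2, v2):
    """KL( N(m1, v1) || N(m2, v2) ), closed form."""
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def marginal_kl(pred, oracle, incoming):
    """Marginal KL of Eq. 8: compare the beliefs that the predicted and
    oracle factor-to-variable messages induce on the target variable,
    where each belief is the product with the incoming message."""
    b_pred = gaussian_product(*pred, *incoming)
    b_oracle = gaussian_product(*oracle, *incoming)
    return gaussian_kl(*b_pred, *b_oracle)
```

Identical predicted and oracle messages give a marginal KL of zero; any perturbation of the prediction yields a strictly positive error.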
We represent the outgoing message mout by a vector of real-valued numbers rout. Note that din and dout, the numbers of elements in rin and rout respectively, need not be equal.

Weak learner model. Data arriving at a split node j is separated into the node's two children according to a binary weak learner h(tin, τj) ∈ {0, 1}, where τj parameterizes the split criterion. We use weak learners of the generic oriented hyperplane type throughout (see [4] for details).

Prediction model. Each leaf node is associated with a subset of the labelled training data. During testing, a previously unseen set of incoming messages traverses the tree until it reaches a leaf which, by construction, is likely to contain similar training examples. We therefore use the statistics of the data gathered in that leaf to predict outgoing messages, with a multivariate polynomial regression model of the form

rout^train = W · φn(rin^train) + ϵ,

where φn(·) is the n-th degree polynomial basis function and ϵ is the dout-dimensional vector of normal error terms. We use the learned dout × din-dimensional matrix of coefficients W at test time to make predictions rout for each rin. To recap: tin is used to traverse message sets down to leaves, and rin is used by the linear regressor to predict rout.

Training objective function. The optimization of the split functions proceeds in a greedy manner. At each node j, depending on the subset Sj of the incoming training set, we learn the function that 'best' splits Sj into the training sets corresponding to each child, Sj^L and Sj^R, i.e. τj = argmax over τ∈Tj of I(Sj, τ). This optimization is performed as a search over Tj, a discrete set of randomly sampled parameter settings. The number of elements in Tj is typically kept small, introducing random variation between the different trees in the forest.
The objective function I is

I(Sj, τ) = −E(Sj^L, W^L) − E(Sj^R, W^R),   (9)

where W^L and W^R are the parameters of the polynomial regression models corresponding to the left and right training sets Sj^L and Sj^R, and the 'fit residual' E is

E(S, W) = (1/2) Σ over min∈S of [ DKL^mar(m^W_min ∥ m^oracle_min) + DKL^mar(m^oracle_min ∥ m^W_min) ].   (10)

Here min is a set of incoming messages in S, m^oracle_min is the oracle outgoing message, m^W_min is the estimate produced by the regression model specified by W, and DKL^mar is the marginal KL. In simple terms, this objective function splits the training data at each node so that the relationship between the incoming and outgoing messages is well captured by the polynomial regression in each child, as measured by the symmetrized marginal KL.

Ensemble model. A key aspect of forests is that their trees are randomly different from each other. This is due to the relatively small number of weak learner candidates considered in the optimization of the weak learners. During testing, each test point min simultaneously traverses all trees from their roots until it reaches their leaves. The predictions could be combined into a single forest prediction by averaging the parameters rout^t of the outgoing messages mout^t predicted by each tree t; however, this again would be sensitive to the parameterizations of the output distribution types. Instead, we compute the moment average mout of the distributions {mout^t} by averaging the first few moments of each predicted distribution across trees, and solving for the distribution parameters which match the averaged moments. Grosse et al. [6] study the characteristics of the moment average in detail, and have shown that it can be interpreted as minimizing an objective function mout = argmin over m of U({mout^t}, m), where U({mout^t}, m) = Σt DKL(mout^t ∥ m).
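For Gaussian tree predictions, the moment average and the objective U of Grosse et al. evaluated at it both reduce to a few lines. A sketch, again assuming per-tree predictions given as (mean, variance) pairs:

```python
import math

def moment_average(preds):
    """Average the first two moments of per-tree Gaussian predictions and
    match a Gaussian to the result (the moment average m_out)."""
    n = len(preds)
    m1 = sum(m for m, _ in preds) / n            # averaged first moment
    m2 = sum(v + m * m for m, v in preds) / n    # averaged second moment
    return m1, m2 - m1 * m1

def forest_agreement(preds):
    """U({m_t}, m_bar) = sum_t KL(m_t || m_bar), evaluated at the moment
    average: zero iff all trees predict the same distribution."""
    mb, vb = moment_average(preds)
    def kl(m, v):
        return 0.5 * (math.log(vb / v) + (v + (m - mb) ** 2) / vb - 1.0)
    return sum(kl(m, v) for m, v in preds)
```

When the trees agree exactly, the agreement score is zero; disagreement inflates both the moment-averaged variance and the score, which is why it serves as a natural uncertainty proxy.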
Intuitively, the level of agreement between the predictions of the different trees can be used as a proxy for the forest's uncertainty about that prediction (we choose not to use uncertainty within leaves, in order to maintain high prediction speed). If all the trees in the forest predict the same output distribution, their knowledge about the function f is similar despite the randomness in their structures. We therefore set uout ≡ U({mout^t}, mout). A similar notion is used for classification forests, where the entropy of the aggregate output histogram serves as a proxy for the classification's uncertainty [4]. We illustrate how this idea extends to simple regression forests in Fig. 1b, and in Sec. 6 we also show empirically that this uncertainty measure works well in practice.

Online training. During learning, the trees periodically obtain new information in the form of (min, mout^oracle) pairs. The forest makes use of this by pushing min down a fraction 0 < ρ ≤ 1 of the trees to their leaf nodes and retraining the regressors at those leaves. Typically ρ = 1; however, we use values smaller than 1 when the trees are shallow (due to the mapping function being captured well by the regressors at the leaves) and the forest's randomness would otherwise be too low to produce reliable uncertainty estimates. If the regressor's fit residual E at a leaf (Eq. 10) exceeds a user-specified threshold value E^max_leaf, a split is triggered on that node. Note that no depth limit is ever specified.

5 Related work

There are a number of works in the literature that consider using regressors to speed up general purpose inference algorithms. For example, the Inverse MCMC algorithm [20] uses discriminative estimates of local conditional distributions to make proposals for a Metropolis-Hastings sampler; however, these predictors are not aware of their own uncertainty.
Therefore the decision of when the sampler can start to rely on them needs to be made manually, and the user has to explicitly separate offline training from test-time inference computations. A related line of work is that of inference machines [14, 15, 17, 13]. Here, message passing is performed by a sequence of predictions, where the sequence itself is defined by the graphical model. The predictors are jointly trained to ensure that the system produces correct labellings; however, the resulting inference procedure no longer corresponds to the original (or perhaps to any) graphical model, and the method is therefore unsuitable if we care about querying the model's latent variables. The closest work to ours is [7], in which Heess et al. use neural networks to learn to pass EP messages. However, their method requires the user to anticipate ahead of time the set of messages that will ever be sent by the factor (itself a highly non-trivial task), and it has no notion of confidence in its predictions, so it will fail silently when it sees unfamiliar input messages. In contrast, the JIT learner trains in the context of a specific model, thereby allocating resources more efficiently, and because it knows what it knows, it buys generality without having to do extensive pre-training.

6 Experiments

We first analyze the behaviour of JIT learning with diagnostic experiments on two factors, logistic and compound gamma, which were also considered by [7]. We then demonstrate its application to a challenging model of US corn yield data. The experiments were performed using the extensible factor API in Infer.NET [12]. Unless stated otherwise, we use default Infer.NET settings (e.g. for message schedules and other factor implementations). We set the number of trees in each forest to 64 and use quadratic regressors. Message parameterizations and graphical models, experiments on a product factor, and a quantitative comparison with [7] can be found in the supplementary material.
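The sampling oracles used in these experiments implement the importance-sampling message estimate of Eq. 4. For a Gaussian approximating family, the proj step amounts to matching the first two self-normalized weighted-sample moments. The sketch below illustrates this on a toy deterministic factor; the factor, the proposal mismatch, and all names are illustrative, not taken from the paper:

```python
import math
import random

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def is_message_moments(sample_q, forward_sample, weight_fn, n_samples=20000, seed=0):
    """Self-normalized importance sampling estimate of the first two moments
    of the outgoing variable; for a Gaussian target type, proj matches
    exactly these moments."""
    rng = random.Random(seed)
    w_sum = m1 = m2 = 0.0
    for _ in range(n_samples):
        x_in = sample_q(rng)           # x_in^s ~ q
        x_out = forward_sample(x_in)   # forward sample through the factor
        w = weight_fn(x_in)            # incoming-message density / q(x_in)
        w_sum += w
        m1 += w * x_out
        m2 += w * x_out * x_out
    mean = m1 / w_sum
    return mean, m2 / w_sum - mean * mean

# Toy factor: x_out = 2 * x_in; incoming message N(1, 1); a deliberately
# mismatched, wider proposal q = N(0, 2^2) exercises the weights.
mean, var = is_message_moments(
    sample_q=lambda rng: rng.gauss(0.0, 2.0),
    forward_sample=lambda x: 2.0 * x,
    weight_fn=lambda x: norm_pdf(x, 1.0, 1.0) / norm_pdf(x, 0.0, 2.0))
```

Since x_out = 2·x_in with x_in effectively distributed as N(1, 1), the estimate recovers a mean near 2 and a variance near 4.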
[Figure 2: Uncertainty aware regression. All plots for the Gaussian forest. (a) Inference error: histogram of marginal KLs of outgoing messages, which are typically very small. (b) Worst predicted messages: the forest's most inaccurate predictions (black: m^oracle, red: m, dashed black: b^oracle, purple: b). (c) Awareness of uncertainty: the regressor's uncertainty increases in tandem with marginal KL, i.e. it does not make confident but inaccurate predictions.]

[Figure 3: Logistic JIT learning. (a) Oracle consultation rate: the factor consults the oracle for only a fraction of messages, (b) leading to significant savings in time, (c) whilst maintaining (or even decreasing) inference error.]

Logistic. We have access to a hand-crafted EP implementation of this factor, allowing us to perform quantitative analysis of the JIT framework's performance. The logistic factor deterministically computes xout = σ(xin) = 1/(1 + exp{−xin}).
Sensible choices for the incoming and outgoing message types are Gaussian and Beta, respectively. We study the logistic factor in the context of Bayesian logistic regression models, where the relationship between an input vector x and a binary output observation y is modeled as p(y = 1) = σ(w^T x). We place zero-mean, unit-variance Gaussian priors on the entries of the regression parameters w, and run EP inference for 10 iterations. We first demonstrate that the forests described in Sec. 4 are fast and accurate uncertainty aware regressors by applying them to five synthetic logistic regression 'problems' as follows: for each problem, we sample a groundtruth w and training xs from N(0, 1), and then sample their corresponding ys. We use a Bayesian logistic regression model to infer w using the training datasets and make predictions on the test datasets, whilst recording the messages that the factor receives and sends during both kinds of inference. We split the observed message sets into training (70%) and hold-out (30%) sets, and train and evaluate the random forests on the two sets. In Fig. 2 we show that the regressor is accurate, and that it is uncertain whenever it makes predictions with higher error. One useful diagnostic for choosing the various parameters of the forests (including the choice of parameterization for rin and tin, as well as the leaf tolerance E^max_leaf) is the average utilization of its leaves during held-out prediction, i.e. what fraction of leaves are visited at test time. In this experiment the forests obtain an average utilization of 1, meaning that every leaf contributes to the predictions of the 30% held-out data, thereby indicating that the forests have learned a highly compact representation of the underlying function. As described in Sec. 3, we also use the data gathered in this experiment to find an appropriate value of umax for use in just-in-time learning. Next we evaluate the uncertainty aware regressor in the context of JIT learning.
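The synthetic problem generation just described (sample a groundtruth w, draw inputs from N(0, 1), label via the logistic) can be sketched as follows; the dimensions and dataset sizes are illustrative defaults, not the paper's:

```python
import math
import random

def make_problem(dim=4, n_train=200, n_test=50, seed=0):
    """Sample a groundtruth w, then draw train/test sets with inputs from
    N(0, 1) and labels y ~ Bernoulli(sigma(w . x))."""
    rng = random.Random(seed)
    w = [rng.gauss(0.0, 1.0) for _ in range(dim)]

    def draw(n):
        xs, ys = [], []
        for _ in range(n):
            x = [rng.gauss(0.0, 1.0) for _ in range(dim)]
            p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            xs.append(x)
            ys.append(1 if rng.random() < p else 0)
        return xs, ys

    return w, draw(n_train), draw(n_test)
```

Fixing the seed makes each problem reproducible, which is convenient when comparing oracle and JIT runs on identical data.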
We present several related regression problems to a JIT logistic factor, i.e. we keep w fixed and generate multiple new {(x, y)} sets. This is a natural setting, since in practice we often observe multiple datasets which we believe to have been generated by the same underlying process. For each problem, using the JIT factor we infer the regression weights and make predictions on test inputs, comparing wall-clock time and accuracy with non-JIT implementations of the factor. We consider two kinds of oracles: those that consult Infer.NET's message operators, and those that use importance sampling (Eq. 4). As a baseline, we also implemented a K-nearest neighbour (KNN) uncertainty aware regressor. Here, messages are represented using their natural parameters, the uncertainty associated with each prediction is the mean distance to the K closest points in this space, and the outgoing message's parameters are found by averaging the parameters of the K closest output messages. We use the same procedure as described in Sec. 3 to choose umax for KNN. We observe that the JIT factor does indeed learn about the inference problem over time. Fig. 3a shows that the rate at which the factor consults the oracle decreases over the course of the experiment, reaching zero at times (i.e. for these problems the factor relies entirely on its predictions). On average, the factor sends 97.7% of its messages without consulting the sampling oracle (a higher rate of 99.2% when using Infer.NET as the oracle, due to the lack of sampling noise), which leads to several orders of magnitude savings in inference time (from around 8 minutes for sampling to around 800 ms for sampling + JIT), even increasing the speed of our Infer.NET implementation (from around 1300 ms to around 800 ms on average; Fig. 3b).
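The KNN baseline just described can be sketched in a few lines; the class name and the use of plain Euclidean distance in natural-parameter space are our own illustrative choices:

```python
import math

class KnnMessageRegressor:
    """KNN baseline: messages live in natural-parameter space; the
    prediction averages the K nearest stored outputs, and the uncertainty
    is the mean distance to those K neighbours."""
    def __init__(self, k=3):
        self.k = k
        self.xs, self.ys = [], []

    def update(self, x, y):
        self.xs.append(x)
        self.ys.append(y)

    def predict(self, x):
        if len(self.xs) < self.k:
            return None, float('inf')      # not enough data: fully uncertain
        nearest = sorted(
            (math.dist(x, xi), yi) for xi, yi in zip(self.xs, self.ys))[: self.k]
        pred = [sum(c) / self.k for c in zip(*(y for _, y in nearest))]
        uncertainty = sum(d for d, _ in nearest) / self.k
        return pred, uncertainty
```

Unlike the forests, this regressor memorizes every observed pair, so its prediction cost grows with the size of its database, consistent with the speed deterioration noted in the results.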
Note that the forests are not merely memorising a mapping from input to output messages, as evidenced by the difference between the consultation rates of JIT and KNN; note also that KNN prediction speed deteriorates as its database grows. Surprisingly, we observe that the JIT regressors in fact decrease the KL between the results produced by importance sampling and Infer.NET, thereby increasing overall inference accuracy (Fig. 3c; this could be due to the regressors at the leaves of the forests smoothing out the noise of the sampled messages). Reducing the number of importance samples to reach speed parity with JIT drastically degrades the accuracy of the outgoing messages, increasing the overall log KL error from around −11 to around −4.

Compound gamma. The second factor we investigate is the compound gamma factor. The compound gamma construction is used as a heavy-tailed prior over the precision of a Gaussian random variable: first r2 is drawn from a gamma with rate r1 and shape s1, and the precision of the Gaussian is then set to a draw from a gamma with rate r2 and shape s2. Here we have access to closed-form implementations of the two gamma factors in the construction; however, we use the JIT framework to collapse the two into a single factor for increased speed. We study the compound gamma factor in the context of Gaussian fitting: we sample a random number of points from multiple Gaussians with a wide range of precisions, and then infer the precision of each generating Gaussian via Bayesian inference using a compound gamma prior. The number of samples varies between 10 and 100, and the precision varies between 10^−4 and 10^4 across problems. The compound factor learns the message mapping after around 20 problems (see Fig. 4a). Note that only a single message is sent by the factor in each episode, hence the abrupt drop in inference time. This increase in performance comes at negligible loss of accuracy (Figs. 4b, 4c).

Yield.
We also consider a more realistic application, to scientific modelling. This is an example of a scenario for which our framework is particularly suited: scientists often need to build large models with factors that directly take knowledge about certain components of the problem into account. We use JIT learning to implement a factor that relates agricultural yield to temperature in the context of an ecological climate model. Ecologists have strong empirical beliefs about the form of the relationship between temperature and yield (yield increases gradually up to some optimal temperature but drops sharply after that point; see Fig. 5a and [16, 10]), and it is imperative that this relationship is modelled faithfully. Deriving closed-form message operators is a non-trivial task, and therefore the current state of the art is sampling-based (e.g. [3]) and highly computationally intensive.

[Figure 4: Compound gamma JIT learning. (a) Inference time: JIT reduces inference time for sampling from ∼11 seconds to ∼1 ms. (b) Inference error: JIT's posteriors agree highly with Infer.NET; using fewer samples to match JIT speed leads to degradation of accuracy. (c) Accuracy (1 dot per problem): increased speed comes at negligible loss of accuracy.]
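The compound gamma construction described above has a trivial forward sampler, which is all an importance-sampling oracle needs from the collapsed factor. A sketch (note that Python's `random.gammavariate` takes a shape and a scale, so we pass the reciprocal of each rate):

```python
import random

def sample_compound_gamma_precision(s1, r1, s2, rng):
    """Forward sampler for the collapsed compound gamma factor:
    draw the rate r2 ~ Gamma(shape=s1, rate=r1), then the precision
    tau ~ Gamma(shape=s2, rate=r2). The marginal over tau is heavy-tailed."""
    r2 = rng.gammavariate(s1, 1.0 / r1)   # gammavariate(shape, scale)
    return rng.gammavariate(s2, 1.0 / r2)

rng = random.Random(0)
tau = sample_compound_gamma_precision(1.0, 1.0, 1.0, rng)
```

Because the second gamma's rate is itself random, repeated draws span many orders of magnitude, which is exactly why this prior suits precisions ranging from 10^−4 to 10^4.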
[Figure 5: A probabilistic model of corn yield. (a) The yield factor: yield (bushels/acre) versus temperature (Celsius); ecologists believe that yield increases gradually up to some optimum topt but drops sharply after that point [16, 10], and they wish to incorporate this knowledge into their models faithfully. (b) Oracle consultation rate: average consultation rate per 1,000 messages over the course of inference on the three datasets; notice the decrease both within and across datasets. (c) Accuracy (1 dot per county): significant savings in inference time (Table 1) come at a small cost in inference accuracy.]

We obtain yield data for 10% of US counties for 2011–2013 from the USDA National Agricultural Statistics Service [1], and corresponding temperature data using [18]. We first demonstrate that it is possible to perform inference in a large-scale ecological model of this kind with EP (graphical model shown in Fig. 1c; derived in collaboration with computational ecologists; see the supplementary material for a description), using importance sampling to compute messages for the yield factor, for which we lack message-passing operators. In addition to the difficulty of computing messages for the multidimensional yield factor, inference in the model is challenging, as it includes multiple Gaussian processes, separate topt and ymax variables for each location, many copies of the yield factor, and a loopy graph. Results of inference are shown in the supplementary material. We find that with around 100,000 samples the message for the yield factor can be computed accurately, making these by far the slowest computations in the inference procedure.
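The qualitative shape of the assumed yield-temperature response (Fig. 5a) can be sketched with a simple piecewise function. The particular functional form and the `t_kill`/`rise` constants below are purely illustrative assumptions for the shape, not the paper's actual yield factor:

```python
import math

def expected_yield(temp, t_opt, y_max, t_kill=40.0, rise=0.02):
    """Illustrative yield response: a gradual rise towards y_max at the
    optimum t_opt, then a sharp linear drop to zero at a lethal
    temperature t_kill (assumed constants, for shape only)."""
    if temp <= t_opt:
        # gradual increase up to the optimum
        return y_max * math.exp(-rise * (t_opt - temp) ** 2)
    if temp >= t_kill:
        return 0.0
    # sharp, roughly linear decline above the optimum
    return y_max * (t_kill - temp) / (t_kill - t_opt)
```

The curve is continuous at t_opt, rises slowly below it, and falls much faster above it, matching the asymmetry the ecologists describe.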
We apply JIT learning by regressing these messages instead. The high arity of the factor makes the task particularly challenging, as it increases the complexity of the mapping function being learned. Despite this, we find that when performing inference on the 2011 data the factor can learn to accurately send up to 54% of messages without having to consult the oracle, resulting in a speedup of 195%.

Table 1: JIT learning on the corn yield model. FR is the fraction of regressions with no oracle consultation.

         IS       JIT fresh         JIT continued
Year     Time     FR    Speedup     FR    Speedup
2011     451s     54%   195%        —     —
2012     449s     54%   192%        60%   288%
2013     451s     54%   191%        64%   318%

A common scenario is one in which we collect more data and wish to repeat inference. We use the forests learned at the end of inference on the 2011 data to perform inference on the 2012 data, and the forests learned at the end of this to do inference on the 2013 data, and we compare to JIT learning from scratch for each dataset. The factor transfers its knowledge across the problems, increasing inference speedup from 195% to 289% and 317% in the latter two experiments respectively (Table 1), whilst maintaining overall inference accuracy (Fig. 5c).

7 Discussion

The success of JIT learning depends heavily on the accuracy of the regressor and its knowledge about its own uncertainty. Random forests have proven adequate; however, alternatives may exist, and a more sophisticated estimate of uncertainty (e.g. using Gaussian processes) is likely to lead to an increased rate of learning. A second critical ingredient is an appropriate choice of umax, which currently requires a certain amount of manual tuning. In this paper we showed that it is possible to speed up inference by combining EP, importance sampling, and JIT learning; however, it will be of interest to study other inference settings where JIT ideas might be applicable.
Surprisingly, our experiments also showed that JIT learning can increase the accuracy of sampling or accelerate hand-coded message operators, suggesting that it will be fruitful to use JIT to remove bottlenecks even in existing, optimized inference code.

Acknowledgments

Thanks to Tom Minka and Alex Spengler for valuable discussions, and to Silvia Caldararu and Drew Purves for introducing us to the corn yield datasets and models.

References

[1] National Agricultural Statistics Service, 2013. United States Department of Agriculture. http://quickstats.nass.usda.gov/.
[2] Simon Barthelmé and Nicolas Chopin. ABC-EP: Expectation Propagation for Likelihood-free Bayesian Computation. In Proceedings of the 28th International Conference on Machine Learning, pages 289–296, 2011.
[3] Silvia Caldararu, Vassily Lyutsarev, Christopher McEwan, and Drew Purves. Filzbach, 2013. Microsoft Research Cambridge. http://research.microsoft.com/en-us/projects/filzbach/.
[4] Antonio Criminisi and Jamie Shotton. Decision Forests for Computer Vision and Medical Image Analysis. Springer Publishing Company, Incorporated, 2013.
[5] Noah D. Goodman, Vikash K. Mansinghka, Daniel Roy, Keith Bonawitz, and Joshua B. Tenenbaum. Church: a language for generative models. In Uncertainty in Artificial Intelligence, 2008.
[6] Roger B. Grosse, Chris J. Maddison, and Ruslan Salakhutdinov. Annealing between distributions by averaging moments. In Advances in Neural Information Processing Systems 26, pages 2769–2777, 2013.
[7] Nicolas Heess, Daniel Tarlow, and John Winn. Learning to Pass Expectation Propagation Messages. In Advances in Neural Information Processing Systems 26, pages 3219–3227, 2013.
[8] David D. Lewis and William A. Gale. A Sequential Algorithm for Training Text Classifiers. In Special Interest Group on Information Retrieval, pages 3–12. Springer London, 1994.
[9] Lihong Li, Michael L. Littman, and Thomas J. Walsh. Knows what it knows: a framework for self-aware learning.
In Proceedings of the 25th International Conference on Machine Learning, pages 568–575, New York, NY, USA, 2008. ACM.
[10] David B. Lobell, Marianne Banziger, Cosmos Magorokosho, and Bindiganavile Vivek. Nonlinear heat effects on African maize as evidenced by historical yield trials. Nature Climate Change, 1:42–45, 2011.
[11] Thomas Minka. Expectation Propagation for approximate Bayesian inference. PhD thesis, Massachusetts Institute of Technology, 2001.
[12] Thomas Minka, John Winn, John Guiver, and David Knowles. Infer.NET 2.5, 2012. Microsoft Research Cambridge. http://research.microsoft.com/infernet.
[13] Daniel Munoz. Inference Machines: Parsing Scenes via Iterated Predictions. PhD thesis, The Robotics Institute, Carnegie Mellon University, June 2013.
[14] Daniel Munoz, J. Andrew Bagnell, and Martial Hebert. Stacked Hierarchical Labeling. In European Conference on Computer Vision, 2010.
[15] Stephane Ross, Daniel Munoz, Martial Hebert, and J. Andrew Bagnell. Learning Message-Passing Inference Machines for Structured Prediction. In Conference on Computer Vision and Pattern Recognition, 2011.
[16] Wolfram Schlenker and Michael J. Roberts. Nonlinear temperature effects indicate severe damages to U.S. crop yields under climate change. Proceedings of the National Academy of Sciences, 106(37):15594–15598, 2009.
[17] Roman Shapovalov, Dmitry Vetrov, and Pushmeet Kohli. Spatial Inference Machines. In Conference on Computer Vision and Pattern Recognition, pages 2985–2992, 2013.
[18] Matthew J. Smith, Paul I. Palmer, Drew W. Purves, Mark C. Vanderwel, Vassily Lyutsarev, Ben Calderhead, Lucas N. Joppa, Christopher M. Bishop, and Stephen Emmott. Changing how Earth System Modelling is done to provide more useful information for decision making, science and society. Bulletin of the American Meteorological Society, 2014.
[19] Stan Development Team. Stan: A C++ Library for Probability and Sampling, 2014.
[20] Andreas Stuhlmüller, Jessica Taylor, and Noah D.
Goodman. Learning Stochastic Inverses. In Advances in Neural Information Processing Systems 27, 2013.
Fast Kernel Learning for Multidimensional Pattern Extrapolation

Andrew Gordon Wilson∗ (CMU), Elad Gilboa∗ (WUSTL), Arye Nehorai (WUSTL), John P. Cunningham (Columbia)

Abstract

The ability to automatically discover patterns and perform extrapolation is an essential quality of intelligent systems. Kernel methods, such as Gaussian processes, have great potential for pattern extrapolation, since the kernel flexibly and interpretably controls the generalisation properties of these methods. However, automatically extrapolating large scale multidimensional patterns is in general difficult, and developing Gaussian process models for this purpose involves several challenges. The vast majority of kernels, and kernel learning methods, currently succeed only in smoothing and interpolation. This difficulty is compounded by the fact that Gaussian processes are typically tractable only for small datasets, and scaling an expressive kernel learning approach poses different challenges than scaling a standard Gaussian process model: one faces additional computational constraints, and the need to retain significant model structure for expressing the rich information available in a large dataset. In this paper, we propose a Gaussian process approach for large scale multidimensional pattern extrapolation. We recover sophisticated out-of-class kernels; perform texture extrapolation, inpainting, and video extrapolation; and carry out long range forecasting of land surface temperatures, all on large multidimensional datasets, including a problem with 383,400 training points. The proposed method significantly outperforms alternative scalable and flexible Gaussian process methods in both speed and accuracy. Moreover, we show that a distinct combination of expressive kernels, a fully non-parametric representation, and scalable inference which exploits existing model structure are critical for large scale multidimensional pattern extrapolation.
1 Introduction

Our ability to effortlessly extrapolate patterns is a hallmark of intelligent systems: even with large missing regions in our field of view, we can see patterns and textures, and we can visualise in our minds how they generalise across space. Indeed, machine learning methods aim to automatically learn and generalise representations to new situations. Kernel methods, such as Gaussian processes (GPs), are popular machine learning approaches for non-linear regression and classification [1, 2, 3]. Flexibility is achieved through a kernel function, which implicitly represents an inner product of arbitrarily many basis functions. The kernel interpretably controls the smoothness and generalisation properties of a GP, and a well-chosen kernel leads to impressive empirical performance [2]. However, it is extremely difficult to perform large scale multidimensional pattern extrapolation with kernel methods. In this context, the ability to learn a representation of the data depends entirely on learning a kernel, which is a priori unknown. Moreover, kernel learning methods [4] are not typically intended for automatic pattern extrapolation; these methods often involve hand-crafting combinations of Gaussian kernels (for smoothing and interpolation) for specific applications, such as modelling low dimensional structure in high dimensional data. Without human intervention, the vast majority of existing GP models are unable to perform pattern discovery and extrapolation. (∗Authors contributed equally.) While recent approaches such as [5] enable extrapolation on small one dimensional datasets, it is difficult to generalise these approaches to larger multidimensional situations. These difficulties arise because Gaussian processes are computationally intractable on large scale data, and while scalable approximate GP methods have been developed [6, 7, 8, 9, 10, 11, 12, 13], it is uncertain how best to scale expressive kernel learning approaches.
Furthermore, the need for flexible kernel learning on large datasets is especially great, since such datasets often provide more information from which to automatically learn an appropriate statistical representation. In this paper, we introduce GPatt, a flexible, non-parametric, and computationally tractable approach to kernel learning for multidimensional pattern extrapolation, with particular applicability to data with grid structure, such as images, video, and spatio-temporal statistics. Specifically:

• We extend fast Kronecker-based GP inference (e.g., [14, 15]) to account for non-grid data. Our experiments include data where more than 70% of the training data are not on a grid. Indeed, most applications where one would want to exploit Kronecker structure involve missing and non-grid data, caused by, e.g., water, government boundaries, missing pixels, and image artifacts. By adapting expressive spectral mixture kernels to the setting of multidimensional inputs and Kronecker structure, we achieve exact inference and learning costs of O(PN^((P+1)/P)) computations and O(PN^(2/P)) storage, for N datapoints and P input dimensions, compared to the standard O(N^3) computations and O(N^2) storage associated with GPs.

• We show that i) spectral mixture kernels (adapted for Kronecker structure); ii) scalable inference based on Kronecker methods (adapted for incomplete grids); and iii) truly non-parametric representations, when used in combination (to form GPatt), distinctly enable large-scale multidimensional pattern extrapolation with GPs. We demonstrate this through a comparison with various expressive models and inference techniques: i) spectral mixture kernels with arguably the most popular scalable GP inference method (FITC) [10]; ii) a flexible and efficient recent spectral kernel learning method (SSGP) [6]; and iii) the most popular GP kernels with Kronecker-based inference.

• The information capacity of non-parametric methods grows with the size of the data.
A truly non-parametric GP must have a kernel that is derived from an infinite basis function expansion. We find that a truly non-parametric representation is necessary for pattern extrapolation on large datasets, and provide insights into this surprising result.
• GPatt is highly scalable and accurate. This is the first time, as far as we are aware, that highly expressive non-parametric kernels, with in some cases hundreds of hyperparameters, on datasets exceeding N = 10^5 training instances, can be learned from the marginal likelihood of a GP in only minutes. Such experiments show that one can, to some extent, solve kernel selection, and automatically extract useful features from the data, on large datasets, using a special combination of expressive kernels and scalable inference.
• We show that the proposed methodology provides a distinct approach to texture extrapolation and inpainting; it was not previously known how to make GPs work for these fundamental applications.
• Moreover, unlike typical inpainting approaches, such as patch-based methods (which work by recursively copying pixels or patches into a gap in an image, preserving neighbourhood similarities), GPatt is not restricted to spatial inpainting. This is demonstrated on a video extrapolation example, for which standard inpainting methods would be inapplicable [16]. Similarly, we apply GPatt to perform large-scale, long-range forecasting of land surface temperatures, by learning a sophisticated correlation structure across space and time. This learned correlation structure also provides insights into the underlying statistical properties of these data.
• We demonstrate that GPatt can precisely recover sophisticated out-of-class kernels automatically.

2 Spectral Mixture Product Kernels for Pattern Discovery

The spectral mixture kernel has recently been introduced [5] to offer a flexible kernel that can learn any stationary kernel.
By appealing to Bochner's theorem [17] and building a scale mixture of A Gaussian pairs in the spectral domain, [5] produced the spectral mixture kernel

    k_SM(τ) = Σ_{a=1}^{A} w_a^2 exp{−2π^2 τ^2 σ_a^2} cos(2π τ μ_a),    (1)

which they applied to one-dimensional input data with a small number of points. For tractability with multidimensional inputs and large data, we propose a spectral mixture product (SMP) kernel:

    k_SMP(τ | θ) = Π_{p=1}^{P} k_SM(τ_p | θ_p),    (2)

where τ_p is the pth component of τ = x − x′ ∈ R^P, θ_p are the hyperparameters {μ_a, σ_a^2, w_a^2}_{a=1}^{A} of the pth spectral mixture kernel in the product of Eq. (2), and θ = {θ_p}_{p=1}^{P} are the hyperparameters of the SMP kernel. The SMP kernel of Eq. (2) has Kronecker structure, which we exploit for scalable and exact inference in section 2.1. With enough components A, the SMP kernel of Eq. (2) can model any stationary product kernel to arbitrary precision, and is flexible even with a small number of components, since scale-location Gaussian mixture models can approximate many spectral densities. We use SMP-A as shorthand for an SMP kernel with A components in each dimension (for a total of 3PA kernel hyperparameters and 1 noise hyperparameter). Wilson [18, 19] contains detailed discussions of spectral mixture kernels. Critically, a GP with an SMP kernel is not a finite basis function method, but instead corresponds to a finite (A component) mixture of infinite basis function expansions. Therefore such a GP is a truly nonparametric method. This difference between a truly nonparametric representation – namely a mixture of infinite bases – and a parametric kernel method – a finite basis expansion corresponding to a degenerate GP – is critical both conceptually and practically, as our results will show.
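To make Eqs. (1) and (2) concrete, here is a minimal numpy sketch of the SM and SMP kernels. The function names and the array-based parameterisation are our own illustrative choices, not from the paper:

```python
import numpy as np

def spectral_mixture_1d(tau, weights, means, variances):
    """Eq. (1): 1-D spectral mixture kernel at lags tau.

    weights, means, variances are length-A arrays of w_a, mu_a, sigma_a^2.
    """
    tau = np.asarray(tau, dtype=float)[..., None]  # broadcast lags over components
    return np.sum(weights**2
                  * np.exp(-2.0 * np.pi**2 * tau**2 * variances)
                  * np.cos(2.0 * np.pi * tau * means), axis=-1)

def smp_kernel(x, xp, params):
    """Eq. (2): product over the P input dimensions of 1-D SM kernels.

    params is a list of (weights, means, variances) tuples, one per dimension.
    """
    tau = np.asarray(x, dtype=float) - np.asarray(xp, dtype=float)
    k = 1.0
    for tau_p, (w, mu, var) in zip(tau, params):
        k *= spectral_mixture_1d(tau_p, w, mu, var)
    return k
```

At τ = 0 the SM kernel reduces to Σ_a w_a^2, so the weights act as per-component signal-variance contributions.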
2.1 Fast Exact Inference with Spectral Mixture Product Kernels

Gaussian process inference and learning requires evaluating (K + σ^2 I)^{−1} y and log |K + σ^2 I|, for an N × N covariance matrix K, a vector of N datapoints y, and noise variance σ^2, as described in the supplementary material. For this purpose, it is standard practice to take the Cholesky decomposition of (K + σ^2 I), which requires O(N^3) computations and O(N^2) storage for a dataset of size N. However, many real-world applications are engineered for grid structure, including spatial statistics, sensor arrays, image analysis, and time sampling. [14] has shown that the Kronecker structure in product kernels can be exploited for exact inference and hyperparameter learning in O(P N^{2/P}) storage and O(P N^{(P+1)/P}) operations, so long as the inputs x ∈ X are on a multidimensional grid, meaning X = X_1 × · · · × X_P ⊂ R^P. Details are in the supplement. Here we relax this grid assumption. Assuming we have a dataset of M observations which are not necessarily on a grid, we propose to form a complete grid using W imaginary observations, y_W ∼ N(f_W, ϵ^{−1} I_W), ϵ → 0. The total observation vector y = [y_M, y_W]^⊤ has N = M + W entries: y ∼ N(f, D_N), where the noise covariance matrix D_N = diag(D_M, ϵ^{−1} I_W), with D_M = σ^2 I_M. The imaginary observations y_W have no corrupting effect on inference: the moments of the resulting predictive distribution are exactly the same as for the standard predictive distribution, namely lim_{ϵ→0} (K_N + D_N)^{−1} y = (K_M + D_M)^{−1} y_M (proof in the supplement). For inference, we must evaluate (K_N + D_N)^{−1} y. Since D_N is not a scaled identity (as is the usual case in Kronecker methods), we cannot efficiently decompose K_N + D_N, but we can efficiently take matrix-vector products involving K_N and D_N. We therefore use preconditioned conjugate gradients (PCG) [20] to compute (K_N + D_N)^{−1} y, an iterative method involving only matrix-vector products. We use the preconditioning matrix C = D_N^{−1/2} to solve C^⊤ (K_N + D_N) C z = C^⊤ y.
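The two computational ingredients just described can be sketched in numpy: a Kronecker matrix-vector product that never forms the full N × N matrix, and a conjugate-gradient solve of (K_N + D_N) z = y with a diagonal preconditioner standing in for C = D_N^{−1/2}. This is an illustrative sketch under our own naming, not the authors' implementation:

```python
import numpy as np

def kron_mvp(Ks, v):
    """Compute (K_1 kron ... kron K_P) v without forming the full matrix.

    Ks: list of per-dimension Gram matrices K_p of shape (n_p, n_p).
    Uses the standard reshape trick; costs O(P N^{(P+1)/P}) for N = prod(n_p).
    """
    x = np.asarray(v, dtype=float)
    for K in Ks:
        n = K.shape[0]
        # bring this dimension to the front, apply K, cycle dimensions
        x = (K @ x.reshape(n, -1)).T.reshape(-1)
    return x

def pcg_solve(mvp, d_noise, y, tol=1e-10, maxiter=500):
    """Solve (K + diag(d_noise)) z = y by preconditioned conjugate gradients.

    mvp(v) returns K v (e.g. via kron_mvp); only matrix-vector products are
    used. The diagonal preconditioner plays the role of C = D_N^{-1/2}.
    """
    A = lambda v: mvp(v) + d_noise * v
    Minv = 1.0 / d_noise
    x = np.zeros_like(y)
    r = y - A(x)
    s = Minv * r
    p = s.copy()
    rs = r @ s
    for _ in range(maxiter):
        Ap = A(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break
        s = Minv * r
        rs_new = r @ s
        p = s + (rs_new / rs) * p
        rs = rs_new
    return x
```

With ϵ^{−1} on the imaginary entries of d_noise, the preconditioner strongly damps those coordinates, which is the effect described in the text.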
The preconditioning matrix C speeds up convergence by ignoring the imaginary observations y_W. Exploiting the fast multiplication of Kronecker matrices, PCG takes O(J P N^{(P+1)/P}) total operations (where the number of iterations J ≪ N) to compute (K_N + D_N)^{−1} y to convergence within machine precision (supplement). This procedure can also be used to handle heteroscedastic noise. For learning (hyperparameter training) we must evaluate the marginal likelihood (supplement). We cannot efficiently compute the log |K_M + D_M| complexity penalty in the marginal likelihood, because K_M is not a Kronecker matrix. We approximate the complexity penalty as

    log |K_M + D_M| = Σ_{i=1}^{M} log(λ_i^M + σ^2) ≈ Σ_{i=1}^{M} log(λ̃_i^M + σ^2),    (3)

for noise variance σ^2. We approximate the eigenvalues λ_i^M of K_M using the eigenvalues of K_N such that λ̃_i^M = (M/N) λ_i^N for i = 1, . . . , M, which is particularly effective for large M (e.g. M > 1000) [7]. [21] proves this eigenvalue approximation is asymptotically consistent (i.e., it converges in the limit of large M), and [22] shows how one can bound the true eigenvalues by their approximation using PCA. Notably, only the log-determinant (complexity penalty) term in the marginal likelihood undergoes a small approximation, and inference remains exact. All remaining terms in the marginal likelihood can be computed exactly and efficiently using PCG. The total runtime cost of hyperparameter learning and exact inference with an incomplete grid is thus O(P N^{(P+1)/P}). In image problems, for example, P = 2, and so the runtime complexity reduces to O(N^{1.5}). Although the proposed inference can handle non-grid data, it is most suited to inputs where there is some grid structure – images, video, spatial statistics, etc. If there is no such grid structure (e.g., none of the training data fall onto a grid), then the computational expense necessary to augment the data with imaginary grid observations can be prohibitive.
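A sketch of the log-determinant approximation in Eq. (3), under the assumption that K_N factors as a Kronecker product of per-dimension Gram matrices (so its eigenvalues are products of the per-dimension eigenvalues); the helper name is ours:

```python
import numpy as np

def approx_logdet(Ks, sigma2, M):
    """Approximate log|K_M + sigma2*I| as in Eq. (3).

    Ks: per-dimension Gram matrices of the completed grid, so the eigenvalues
    of K_N = K_1 kron ... kron K_P are products of per-dimension eigenvalues.
    The M largest are scaled by M/N to approximate the eigenvalues of K_M.
    """
    eigs = np.ones(1)
    for K in Ks:
        # eigenvalues of a Kronecker product: all pairwise products
        eigs = np.outer(eigs, np.linalg.eigvalsh(K)).reshape(-1)
    N = eigs.size
    top = np.sort(eigs)[::-1][:M]  # M largest grid eigenvalues
    return np.sum(np.log((M / N) * top + sigma2))
```

When M = N (no missing grid points) the scaling factor is 1 and the expression is the exact log-determinant.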
Although incomplete grids have been briefly considered in, e.g., [23], such approaches generally involve costly and numerically unstable rank-1 updates, inducing inputs, and separate (and restricted) treatments of 'missing' and 'extra' data. Moreover, the marginal likelihood, critical for kernel learning, is not typically considered in alternative approaches to incomplete grids.

3 Experiments

In our experiments we combine the SMP kernel of Eq. (2) with the fast exact inference and learning procedures of section 2.1, in a GP method we henceforth call GPatt¹,². We contrast GPatt with many alternative Gaussian process kernel methods. We are particularly interested in kernel methods, since they are considered to be general-purpose regression methods, but conventionally have difficulty with large-scale multidimensional pattern extrapolation. Specifically, we compare to the recent sparse spectrum Gaussian process regression (SSGP) [6] method, which provides fast and flexible kernel learning. SSGP models the kernel spectrum (spectral density) as a sum of point masses, such that SSGP is a finite basis function (parametric) model, with as many basis functions as there are spectral point masses. SSGP is similar to the recent models of Le et al. [8] and Rahimi and Recht [9], except it learns the locations of the point masses through marginal likelihood optimization. We use the SSGP implementation provided by the authors at http://www.tsc.uc3m.es/~miguel/downloads.php. To further test the importance of the fast inference (section 2.1) used in GPatt, we compare to a GP which uses the SMP kernel of section 2 but with the popular fast FITC [10, 24] inference, which uses inducing inputs and is implemented in GPML (http://www.gaussianprocess.org/gpml).
We also compare to GPs with the popular squared exponential (SE), rational quadratic (RQ), and Matérn (MA) (with 3 degrees of freedom) kernels, catalogued in Rasmussen and Williams [1], respectively for smooth, multi-scale, and finitely differentiable functions. Since GPs with these kernels cannot scale to the large datasets we consider, we combine these kernels with the same fast inference techniques that we use with GPatt, to enable a comparison.³ Moreover, we stress test each of these methods in terms of speed and accuracy, as a function of available data, extrapolation range, and number of components. All of our experiments contain a large percentage of non-grid data, and we test accuracy and efficiency as a function of the percentage of missing data. In all experiments we assume Gaussian noise, to express the marginal likelihood of the data p(y|θ) solely as a function of kernel hyperparameters θ. To learn θ we optimize the marginal likelihood using BFGS. We use a simple initialisation scheme: any frequencies {μ_a} are drawn from a uniform distribution from 0 to the Nyquist frequency (1/2 the sampling rate), length-scales {1/σ_a} from a truncated Gaussian distribution with mean proportional to the range of the data, and weights {w_a} are initialised as the empirical standard deviation of the data divided by the number of components used in the model. In general, we find GPatt is robust to initialisation, particularly for N > 10^4 datapoints. We show a representative initialisation in the experiments. This range of tests allows us to separately understand the effects of the SMP kernel, a non-parametric representation, and the proposed inference methods of section 2.1; we will show that all are required for good extrapolation performance.
¹We write GPatt-A when GPatt uses an SMP-A kernel.
²Experiments were run on a 64-bit PC, with 8GB RAM and a 2.8 GHz Intel i7 processor.
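The simple initialisation scheme described above, for a single input dimension, might look like the following. This is a hypothetical helper: the text does not specify the exact truncation or scale of the length-scale distribution, so those choices below are illustrative:

```python
import numpy as np

def init_smp_hypers(y, sample_rate, data_range, A, rng=None):
    """Illustrative SMP initialisation for one input dimension.

    Frequencies mu_a ~ U(0, Nyquist); length-scales 1/sigma_a from a Gaussian
    with mean proportional to the data range, reflected to stay positive (a
    simple stand-in for truncation); weights w_a set to the empirical standard
    deviation of y divided by the number of components A.
    """
    rng = np.random.default_rng() if rng is None else rng
    nyquist = 0.5 * sample_rate
    mu = rng.uniform(0.0, nyquist, size=A)
    lengthscales = np.abs(rng.normal(loc=data_range, scale=data_range, size=A))
    sigma = 1.0 / lengthscales
    w = np.full(A, np.std(y) / A)
    return w, mu, sigma
```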
³We also considered the model of [25], but this model is intractable for the datasets we considered and is not structured for the fast inference of section 2.1.

3.1 Extrapolating Metal Tread Plate and Pores Patterns

We extrapolate the missing region, shown in Figure 1a, on a real metal tread plate texture. There are 12675 training instances (Figure 1a), and 4225 test instances (Figure 1b). The inputs are pixel locations x ∈ R^2 (P = 2), and the outputs are pixel intensities. The full pattern is shown in Figure 1c. This texture contains shadows and subtle irregularities, no two identical diagonal markings, and patterns that have correlations across both input dimensions.

Figure 1: (a)-(j): Extrapolation on a Metal Tread Plate Pattern. Missing data are shown in black. a) Training region (12675 points), b) Testing region (4225 points), c) Full tread plate pattern, d) GPatt-30, e) SSGP with 500 basis functions, f) FITC with 500 inducing (pseudo) inputs and the SMP-30 kernel, and GPs with the fast exact inference in section 2.1 and g) squared exponential (SE), h) Matérn (MA), and i) rational quadratic (RQ) kernels. j) Initial and learned hyperparameters using GPatt with the simple initialisation. During training, weights of extraneous components automatically shrink to zero. (k)-(m) and (n)-(p): Extrapolation on tread plate and pore patterns, respectively, with added artifacts and non-stationary lighting changes.

To reconstruct the missing and training regions, we use GPatt-30. The GPatt reconstruction shown in Fig 1d is as plausible as the true full pattern shown in Fig 1c, and largely automatic.
Without hand-crafting kernel features to suit this image, exposure to similar images, or a sophisticated initialisation, GPatt has automatically discovered the underlying structure of this image, and extrapolated that structure across a large missing region, even though the structure of this pattern is not independent across the two spatial input dimensions. Indeed, the separability of the SMP kernel represents only a soft prior assumption, and does not rule out posterior correlations between input dimensions. The reconstruction in Figure 1e was produced with SSGP, using 500 basis functions. In principle SSGP can model any spectral density (and thus any stationary kernel) with infinitely many components (basis functions). However, since these components are point masses (in frequency space), each component has highly limited expressive power. Moreover, with many components SSGP experiences practical difficulties regarding initialisation, over-fitting, and computation time (scaling quadratically with the number of basis functions). Although SSGP does discover some interesting structure (a diagonal pattern), and has equal training and test performance, it is unable to capture enough information for a convincing reconstruction, and we did not find that more basis functions improved performance. Likewise, FITC with an SMP-30 kernel and 500 inducing (pseudo) inputs cannot capture the necessary information to interpolate or extrapolate. On this example, FITC ran for 2 days, and SSGP-500 for 1 hour, compared to GPatt which took under 5 minutes. GPs with SE, MA, and RQ kernels are all truly Bayesian nonparametric models – these kernels are derived from infinite basis function expansions. Therefore, as seen in Figure 1 g), h), i), these methods are completely able to capture the information in the training region; however, these kernels do not have the proper structure to reasonably extrapolate across the missing region – they simply act as smoothing filters.
Moreover, this comparison is only possible because we have implemented these GPs using the fast exact inference techniques introduced in section 2.1.

Figure 2: Stress Tests. a) Runtime Stress Test. We show the runtimes in seconds, as a function of training instances, for evaluating the log marginal likelihood, and any relevant derivatives, for a standard GP with SE kernel (as implemented in GPML), FITC with 500 inducing (pseudo) inputs and SMP-25 and SMP-5 kernels, SSGP with 90 and 500 basis functions, and GPatt-100, GPatt-25, and GPatt-5. Runtimes are for a 64-bit PC, with 8GB RAM and a 2.8 GHz Intel i7 processor, on the cone pattern (P = 2), shown in the supplement. The ratio of training inputs to the sum of imaginary and training inputs for GPatt is 0.4 and 0.6 for the smallest two training sizes, and 0.7 for all other training sets. b) Accuracy Stress Test. MSLL as a function of holesize on the metal pattern of Figure 1. The values on the horizontal axis represent the fraction of missing (testing) data from the full pattern (for comparison, Fig 1a has 25% missing data). We compare GPatt-30 and GPatt-15 with GPs with SE, MA, and RQ kernels (and the inference of section 2.1), and SSGP with 100 basis functions. The MSLL for GPatt-15 at a holesize of 0.01 is −1.5886. c) Recovering Sophisticated Kernels. A product of three kernels (shown in green) was used to generate a movie of 112,500 training points. From this data, GPatt-20 reconstructs these component kernels (the learned SMP-20 kernel is shown in blue). All kernels are a function of τ = x − x′ and have been scaled by k(0).

Overall, these results indicate that both expressive nonparametric kernels, such as the SMP kernel, and the specific fast inference in section 2.1, are needed to extrapolate patterns in these images.
We note that the SMP-30 kernel used with GPatt has more components than needed for this problem. However, as shown in Fig. 1j, if the model is overspecified, the complexity penalty in the marginal likelihood shrinks the weights ({w_a} in Eq. (1)) of extraneous components, as a proxy for model selection – an effect similar to automatic relevance determination [26]. Components which do not significantly contribute to model fit are automatically pruned, as shrinking the weights decreases the eigenvalues of K and thus minimizes the complexity penalty (a sum of log eigenvalues). The simple GPatt initialisation in Fig 1j is used in all experiments and is especially effective for N > 10^4. In Figure 1 (k)-(m) and (n)-(p) we use GPatt to extrapolate on tread plate and pore patterns with added artifacts and lighting changes. GPatt still provides a convincing extrapolation, able to uncover both local and global structure. Alternative GPs with the inference of section 2.1 can interpolate small artifacts quite accurately, but have trouble with larger missing regions.

3.2 Stress Tests and Recovering Complex 3D Kernels from Video

We stress test GPatt and alternative methods in terms of speed and accuracy, with varying data sizes, extrapolation ranges, basis functions, inducing (pseudo) inputs, and components. We assess accuracy using standardised mean square error (SMSE) and mean standardised log loss (MSLL) (a scaled negative log likelihood), as defined in Rasmussen and Williams [1] on page 23. Using the empirical mean and variance to fit the data would give an SMSE and MSLL of 1 and 0, respectively. Smaller SMSE and more negative MSLL values correspond to better fits of the data. The runtime stress test in Figure 2a shows that the number of components used in GPatt does not significantly affect runtime, and that GPatt is much faster than FITC (using 500 inducing inputs) and SSGP (using 90 or 500 basis functions), even with 100 components (601 kernel hyperparameters).
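The two accuracy metrics used throughout can be written down directly from their definitions; these are our own helper functions, following Rasmussen and Williams:

```python
import numpy as np

def smse(y_true, y_pred):
    """Standardised mean square error: MSE divided by the variance of the
    test targets. Predicting the empirical mean gives SMSE = 1; smaller is
    better."""
    return np.mean((y_true - y_pred)**2) / np.var(y_true)

def msll(y_true, mean, var, train_y):
    """Mean standardised log loss: negative log predictive density under
    N(mean, var), minus that of the trivial model N(mean(train_y),
    var(train_y)). Zero for the trivial model; more negative is better."""
    nll = 0.5 * np.log(2 * np.pi * var) + (y_true - mean)**2 / (2 * var)
    m0, v0 = np.mean(train_y), np.var(train_y)
    nll0 = 0.5 * np.log(2 * np.pi * v0) + (y_true - m0)**2 / (2 * v0)
    return np.mean(nll - nll0)
```

Predicting with the empirical mean and variance of the data recovers the SMSE = 1, MSLL = 0 baseline stated above.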
The slope of each curve roughly indicates the asymptotic scaling of each method. In this experiment, the standard GP (with SE kernel) has a slope of 2.9, which is close to the cubic scaling we expect. All other curves have a slope of 1 ± 0.1, indicating linear scaling with the number of training instances. However, FITC and SSGP are used here with a fixed number of inducing inputs and basis functions. More inducing inputs and basis functions should be used when there are more training instances – and these methods scale quadratically with inducing inputs and basis functions for a fixed number of training instances. GPatt, on the other hand, can scale linearly in runtime as a function of training size, without any deterioration in performance. Furthermore, the fixed 2-3 orders of magnitude by which GPatt outperforms the alternatives is as practically important as asymptotic scaling.

Table 1: We compare the test performance of GPatt-30 with SSGP (using 100 basis functions), and GPs using SE, MA, and RQ kernels, combined with the inference of section 2.1, on patterns with a train/test split as in the metal tread plate pattern of Figure 1. We show the results as SMSE (MSLL).

             Rubber mat      Tread plate     Pores            Wood            Chain mail
train, test  12675, 4225     12675, 4225     12675, 4225      14259, 4941     14101, 4779
GPatt        0.31 (−0.57)    0.45 (−0.38)    0.0038 (−2.8)    0.015 (−1.4)    0.79 (−0.052)
SSGP         0.65 (−0.21)    1.06 (0.018)    1.04 (−0.024)    0.19 (−0.80)    1.1 (0.036)
SE           0.97 (0.14)     0.90 (−0.10)    0.89 (−0.21)     0.64 (1.6)      1.1 (1.6)
MA           0.86 (−0.069)   0.88 (−0.10)    0.88 (−0.24)     0.43 (1.6)      0.99 (0.26)
RQ           0.89 (0.039)    0.90 (−0.10)    0.88 (−0.048)    0.077 (0.77)    0.97 (−0.0025)

The accuracy stress test in Figure 2b shows extrapolation (MSLL) performance on the metal tread plate pattern of Figure 1c with varying holesizes, running from 0% to 60% missing data for testing (for comparison, the hole in Fig 1a has 25% missing data).
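The slope estimates quoted above (2.9 for the standard GP, roughly 1 for the scalable methods) can be obtained by fitting a line to the runtimes on log-log axes; a one-line sketch with a helper name of our own:

```python
import numpy as np

def empirical_scaling_exponent(ns, runtimes):
    """Estimate the scaling exponent s from runtimes t(n) ~ c * n**s by
    fitting a line to (log n, log t); the fitted slope is s."""
    slope, _intercept = np.polyfit(np.log(ns), np.log(runtimes), 1)
    return slope
```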
GPs with SE, RQ, and MA kernels (and the fast inference of section 2.1) all steadily increase in error as a function of holesize. Conversely, SSGP does not increase in error as a function of holesize – with finite basis functions SSGP cannot extract as much information from larger datasets as the alternatives. GPatt performs well relative to the other methods, even with a small number of components. GPatt is particularly able to exploit the extra information in additional training instances: only when the holesize is so large that over 60% of the data are missing does GPatt’s performance degrade to the same level as alternative methods. In Table 1 we compare the test performance of GPatt with SSGP, and GPs using SE, MA, and RQ kernels, for extrapolating five different patterns, with the same train test split as for the tread plate pattern in Figure 1. All patterns are shown in the supplement. GPatt consistently has the lowest SMSE and MSLL. Note that many of these datasets are sophisticated patterns, containing intricate details which are not strictly periodic, such as lighting irregularities, metal impurities, etc. Indeed SSGP has a periodic kernel (unlike the SMP kernel which is not strictly periodic), and is capable of modelling multiple periodic components, but does not perform as well as GPatt on these examples. We also consider a particularly large example, where we use GPatt-10 to perform learning and exact inference on the Pores pattern, with 383,400 training points, to extrapolate a large missing region with 96,600 test points. The SMSE is 0.077, and the total runtime was 2800 seconds. Images of the successful extrapolation are shown in the supplement. We end this section by showing that GPatt can accurately recover a wide range of kernels, even using a small number of components. To test GPatt’s ability to recover ground truth kernels, we simulate a 50 × 50 × 50 movie of data (e.g. 
two spatial input dimensions, one temporal) using a GP with kernel k = k_1 k_2 k_3 (each component kernel in this product operates on a different input dimension), where

    k_1 = k_SE + k_SE × k_PER,
    k_2 = k_MA × k_PER + k_MA × k_PER,
    k_3 = (k_RQ + k_PER) × k_PER + k_SE,

with k_PER(τ) = exp[−2 sin^2(π τ ω)/ℓ^2] and τ = x − x′. We use 5 consecutive 50 × 50 slices for testing, leaving a large number N = 112500 of training points, providing much information to learn the true generating kernels. Moreover, GPatt-20 reconstructs these complex out-of-class kernels in under 10 minutes, as shown in Fig 2c. In the supplement, we show true and predicted frames from the movie.

3.3 Wallpaper and Scene Reconstruction and Long Range Temperature Forecasting

Although GPatt is a general-purpose regression method, it can also be used for inpainting: image restoration, object removal, etc. We first consider a wallpaper image stained by a black apple mark, shown in Figure 3. To remove the stain, we apply a mask and then separate the image into its three channels (red, green, and blue), resulting in 15047 pixels in each channel for training. In each channel we ran GPatt using SMP-30. We then combined the results from each channel to restore the image without any stain, which is impressive given the subtleties in the pattern and lighting. In our next example, we wish to reconstruct a natural scene obscured by a prominent rooftop, shown in the second row of Figure 3a. By applying a mask, and following the same procedure as for the stain, this time with 32269 pixels in each channel for training, GPatt reconstructs the scene without the rooftop.
This reconstruction captures subtle details, such as waves, with only a single training image. In fact this example has been used with inpainting algorithms which were given access to a repository of thousands of similar images [27]. The results emphasized that conventional inpainting algorithms and GPatt have profoundly different objectives, which are sometimes even at cross purposes: inpainting attempts to make the image look good to a human (e.g., the example in [27] placed boats in the water), while GPatt is a general-purpose regression algorithm, which simply aims to make accurate predictions at test input locations, from training data alone. For example, GPatt can naturally learn temporal correlations to make predictions in the video example of section 3.2, for which standard patch-based inpainting methods would be inapplicable [16]. Similarly, we use GPatt to perform long-range forecasting of land surface temperatures. After training on 108 months (9 years) of temperature data across North America (299,268 training points; a 71 × 66 × 108 completed grid, with missing data for water), we forecast 12 months (1 year) ahead (33,252 testing points). The runtime was under 30 minutes.

Figure 3: a) Image inpainting with GPatt. From left to right: A mask is applied to the original image, GPatt extrapolates the mask region in each of the three (red, green, blue) image channels, and the results are joined to produce the restored image. Top row: Removing a stain (train: 15047 × 3). Bottom row: Removing a rooftop to restore a natural scene (train: 32269 × 3). We do not extrapolate the coast. (b)-(c): Kernels learned for land surface temperatures using GPatt and GP-SE.
The learned kernels using GPatt and GP-SE are shown in Figure 3 b) and c). The learned kernels for GPatt are highly non-standard – both quasi-periodic and heavy-tailed. These learned correlation patterns provide insights into features (such as seasonal influences) which affect how temperatures vary in space and time. Indeed, learning the kernel allows us to discover fundamental properties of the data. The temperature forecasts using GPatt and GP-SE, superimposed on maps of North America, are shown in the supplement.

4 Discussion

Large-scale multidimensional pattern extrapolation problems are of fundamental importance in machine learning, where we wish to develop scalable models which can make impressive generalisations. However, there are many obstacles to applying popular kernel methods, such as Gaussian processes, to these fundamental problems. We have shown that a combination of expressive kernels, truly Bayesian nonparametric representations, and inference which exploits model structure can distinctly enable a kernel approach to these problems. Moreover, there is much promise in further exploring Bayesian nonparametric kernel methods for large-scale pattern extrapolation. Such methods can be extremely expressive, and expressive methods are most needed for large-scale problems, which provide relatively more information for automatically learning a rich statistical representation of the data.

Acknowledgements AGW thanks ONR grant N000141410684 and NIH grant R01GM093156. JPC thanks Simons Foundation grants SCGB #325171, #325233, and the Grossman Center at Columbia.

References
[1] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[2] C.E. Rasmussen. Evaluation of Gaussian Processes and Other Methods for Non-linear Regression. PhD thesis, University of Toronto, 1996.
[3] A. O'Hagan. Curve fitting and optimal design for prediction. Journal of the Royal Statistical Society, B (40):1–42, 1978.
[4] M. Gönen and E.
Alpaydın. Multiple kernel learning algorithms. Journal of Machine Learning Research, 12:2211–2268, 2011.
[5] A.G. Wilson and R.P. Adams. Gaussian process kernels for pattern discovery and extrapolation. International Conference on Machine Learning, 2013.
[6] M. Lázaro-Gredilla, J. Quiñonero-Candela, C.E. Rasmussen, and A.R. Figueiras-Vidal. Sparse spectrum Gaussian process regression. Journal of Machine Learning Research, 11:1865–1881, 2010.
[7] C.K.I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems, pages 682–688. MIT Press, 2001.
[8] Q. Le, T. Sarlos, and A. Smola. Fastfood: computing Hilbert space expansions in loglinear time. In International Conference on Machine Learning, pages 244–252, 2013.
[9] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Neural Information Processing Systems, 2007.
[10] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems, volume 18, page 1257. MIT Press, 2006.
[11] J. Hensman, N. Fusi, and N.D. Lawrence. Gaussian processes for big data. In Uncertainty in Artificial Intelligence (UAI). AUAI Press, 2013.
[12] M. Seeger, C.K.I. Williams, and N.D. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In Workshop on AI and Statistics, volume 9, 2003.
[13] J. Quiñonero-Candela and C.E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. The Journal of Machine Learning Research, 6:1939–1959, 2005.
[14] Y. Saatçi. Scalable Inference for Structured Gaussian Process Models. PhD thesis, University of Cambridge, 2011.
[15] E. Gilboa, Y. Saatçi, and J.P. Cunningham. Scaling multidimensional inference for structured Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013.
[16] C. Guillemot and O. Le Meur. Image inpainting: Overview and recent advances.
Signal Processing Magazine, IEEE, 31(1):127–144, 2014.
[17] S. Bochner. Lectures on Fourier Integrals, volume 42. Princeton University Press, 1959.
[18] A.G. Wilson. A process over all stationary kernels. Technical report, University of Cambridge, June 2012. http://www.cs.cmu.edu/~andrewgw/spectralkernel.pdf.
[19] A.G. Wilson. Covariance Kernels for Fast Automatic Pattern Discovery and Extrapolation with Gaussian Processes. PhD thesis, University of Cambridge, 2014. URL http://www.cs.cmu.edu/~andrewgw/andrewgwthesis.pdf.
[20] K.E. Atkinson. An Introduction to Numerical Analysis. John Wiley & Sons, 2008.
[21] C.T.H. Baker. The Numerical Treatment of Integral Equations. 1977.
[22] C.K.I. Williams and J. Shawe-Taylor. The stability of kernel principal components analysis and its relation to the process eigenspectrum. In Advances in Neural Information Processing Systems, volume 15, page 383. MIT Press, 2003.
[23] Y. Luo and R. Duraiswami. Fast near-grid Gaussian process regression. In International Conference on Artificial Intelligence and Statistics, 2013.
[24] A. Naish-Guzman and S. Holden. The generalized FITC approximation. In Advances in Neural Information Processing Systems, pages 1057–1064, 2007.
[25] D. Duvenaud, J.R. Lloyd, R. Grosse, J.B. Tenenbaum, and Z. Ghahramani. Structure discovery in nonparametric regression through compositional kernel search. In International Conference on Machine Learning, 2013.
[26] D.J.C. MacKay. Bayesian nonlinear modeling for the prediction competition. ASHRAE Transactions, 100(2):1053–1062, 1994.
[27] J. Hays and A. Efros. Scene completion using millions of photographs. Communications of the ACM, 51(10):87–94, 2008.
Recursive Context Propagation Network for Semantic Scene Labeling
Abhishek Sharma, University of Maryland, College Park, MD, bhokaal@cs.umd.edu
Oncel Tuzel and Ming-Yu Liu, Mitsubishi Electric Research Labs (MERL), Cambridge, MA, {oncel,mliu}@merl.com
Abstract
We propose a deep feed-forward neural network architecture for pixel-wise semantic scene labeling. It uses a novel recursive neural network architecture for context propagation, referred to as rCPN. It first maps the local visual features into a semantic space, followed by a bottom-up aggregation of local information into a global representation of the entire image. Then a top-down propagation of the aggregated information takes place that enhances the contextual information of each local feature. Therefore, the information from every location in the image is propagated to every other location. Experimental results on the Stanford background and SIFT Flow datasets show that the proposed method outperforms previous approaches. It is also orders of magnitude faster than previous methods and takes only 0.07 seconds on a GPU for pixel-wise labeling of a 256 × 256 image starting from raw RGB pixel values, given the super-pixel mask, which takes an additional 0.3 seconds using an off-the-shelf implementation.
1 Introduction
Semantic labeling aims at obtaining a pixel-wise dense labeling of an image in terms of semantic concepts such as tree, road, sky, water, and foreground objects. Mathematically, the problem can be framed as a mapping from a set of nodes arranged on a 2D grid (pixels) to the semantic categories. Typically, this task is broken down into two steps: feature extraction and inference. Feature extraction involves retrieving descriptive information useful for semantic labeling under varying illumination and view-point conditions. These features are generally color, texture or gradient based, and extracted from a local patch around each pixel.
The inference step consists of predicting the labels of the pixels using the extracted features. The rich diversity in the appearance of even simple concepts (sky, water, grass) makes semantic labeling very challenging. Surprisingly, human performance is close to perfect on this task. This striking performance gap has been an active topic of research in the vision community. Past experience and recent research [1, 2, 3] have conclusively established that the ability of humans to utilize the information from the entire image is the main reason behind the large performance gap. Interestingly, [2, 3] have shown that human performance in labeling a small local region (super-pixel) is worse than a computer when both are looking at only that region of the image. Motivated by these observations, increasingly sophisticated inference algorithms have been developed to utilize the information from the entire image. Conditional Random Fields (CRFs) [4] and Structured Support Vector Machines (SVMs) [5] are among the most successful and widely used algorithms for inference.
We model the semantic labeling task as a mapping from the set of all pixels in an image I to the corresponding label image Y. We have several design considerations: (1) the mapping should be fast to evaluate, (2) it should utilize the entire image such that every location influences the labeling of every other location, (3) the mapping parameters should be learned from the training data, and (4) it should scale to different image sizes. In addition, good generalization requires limiting the capacity of the mapping while still utilizing the entire image information at once. For example, a simple fully-connected linear mapping from I to Y requires 4 trillion parameters for an image of size 256 × 256, but it will fail to generalize under practical conditions of limited training data.
Figure 1: Conceptual illustration of recursive context propagation network (rCPN). rCPN recursively aggregates contextual information from local neighborhoods to the entire image and then disseminates global context information back to individual local features. In this example, starting from confusion between boat and building, the propagated context information helps resolve the confusion by using the feature of the water segment. (Semantic labels: sky, water, building, boat, tree.)
Considering the requirements discussed above, we designed the mapping as a single feed-forward neural network with carefully controlled capacity through parameter sharing. All the network parameters are learned from the data, and the feed-forward structure allows fast inference. The proposed network can be functionally partitioned into two sub-networks: local feature extraction and recursive context propagation. As the name implies, local feature extraction refers to the extraction of pixel- or region-wise visual features for semantic labeling. We used the multi-scale convolutional neural network (Multi-CNN) architecture proposed in [6] to get pixel-wise features. The convolutional structure with shared parameters brings down the number of parameters for local feature extraction. We propose a novel recursive context propagation network (rCPN), which, starting from the local features, recursively aggregates contextual information from local neighborhoods up to the entire image and then disseminates the aggregated information back to individual local features for better semantic classification. rCPN is a recursive neural network with shared parameters through the parse tree hierarchy. A conceptual illustration of this network is given in Figure 1. The scene consists of three segments corresponding to a boat, a tree and a water/sky region. The nodes of the graph (formed by a binary parse tree and its inversion) represent semantic descriptions of the segments.
The distributions on the left are probable label distributions for the adjacent segments based on their appearance. Initially (at the bottom), the boat can be confused with a white building when looking only at the bottom-left segment. The rCPN recursively combines two segment descriptions and produces the semantic description of the combined segment. For example, as the tree is combined with the boat, the belief that the combined segment includes a building increases, since the two usually appear together in images. Similarly, after we merge the water/sky segment description with this segment description, the probability of the boat increases, since the simultaneous occurrence of water and building is rare. The middle node in the graph (the root node of the segmentation tree) corresponds to the semantic description of the entire image. After all the segment descriptions are merged into a single holistic description of the entire image, this information is propagated back to the local regions. This is achieved by recursive updates of the semantic descriptions of the segments given the descriptions of their parent segments. Finally, the contextually enhanced descriptions of the leaf nodes are used to label the segments. Note that rCPN propagates segment semantic descriptions but not the label distributions shown in the illustration. Our work is influenced by Socher et al.'s work [7], which learns a non-linear mapping from feature space to a semantic space, termed semantic mapping. It is learned by optimizing a structure prediction cost on the ground-truth parse trees of training images or sentences. Next, a classifier is learned on the semantic mappings of the individual local features from the training images. At test time, local features are projected to the semantic space using the learned semantic mapping, followed by classification. Therefore, only the information contained in each individual local feature is used for labeling.
In contrast, we use recursive bottom-top-bottom paths on randomly generated parse trees to propagate contextual information from local regions to all other regions in the image. Therefore, our approach uses entire image information for labeling each local region. Please see the experiments section for a detailed comparison. The main contributions of the proposed approach are:
• The proposed model is scalable. It is a combination of a CNN and a recursive neural network which is trained without using any human-designed features. In addition, the convolutional and recursive structure allows scaling to arbitrary image sizes while still utilizing the entire image content at once.
• We achieved state-of-the-art labeling accuracy on two important benchmarks while being an order of magnitude faster than the existing methods due to feed-forward operations. It takes only 0.07 seconds on a GPU and 0.8 seconds on a CPU for pixel-wise semantic labeling of a 256 × 256 image, with a given super-segmentation mask that can be computed using an off-the-shelf algorithm within 0.3 seconds.
• The proposed rCPN module can be used in conjunction with pre-computed features to propagate context information through the structure of an image (see the experiments section) and potentially for other structured prediction tasks.
2 Semantic labeling architecture
In this section we describe our semantic labeling architecture and discuss the design choices made for practical considerations. An illustration of this architecture is shown in Figure 2. The input image is fed to a CNN, F_CNN, which extracts local features per pixel. We then use a super-pixel tessellation of the input image and average pool the local features within the same super-pixel. Then we use the proposed rCPN to recursively propagate the local information throughout the image using a parse tree hierarchy and finally label the super-pixels.
2.1 Local feature extraction
We used the multi-scale CNN architecture proposed in Farabet et al.
[6] for extracting per-pixel local features. This network has three convolutional stages, organized as 8 × 8 × 16 conv → 2 × 2 maxpool → 7 × 7 × 64 conv → 2 × 2 maxpool → 7 × 7 × 256 conv, where each max-pooling is non-overlapping. After each convolution we apply a rectified linear (ReLU) nonlinearity. Unlike [6], we do not preprocess the input raw RGB images other than scaling them between 0 and 1 and centering by subtracting 0.5. Tied filters are applied separately at three scales of the Gaussian pyramid. The final feature maps at the lower scales are spatially scaled up to the size of the feature map at the highest scale and concatenated to get 256 × 3 = 768 dimensional features per pixel. The obtained pixel features are fed to a Softmax classifier for final classification. Please refer to [6] for more details. After training, we drop the final Softmax layer and use the 768 dimensional features as local features. Note that the 768 dimensional concatenated output feature map is still 1/4th of the height and width of the input image due to the max-pooling operations. To obtain an input-size per-pixel feature map we either (1) shift the input image by one pixel on a 4 × 4 grid to get 16 output feature maps that are combined to get the full-resolution image, or (2) scale up each feature map by a factor of 4 using bilinear interpolation. We refer to the latter strategy as fast feature map generation in the experiments section.
Figure 2: Overview of semantic scene labeling architecture.
Super-pixel representation: Although it is possible to do per-pixel classification using the rCPN, learning such a model is computationally intensive and the resulting network is too deep to propagate the gradients efficiently due to recursion.
To reduce the complexity, we utilize a super-pixel segmentation algorithm [8], which provides the desired number of super-pixels per image. This algorithm uses pairwise color similarity together with an entropy rate criterion to produce homogeneous super-pixels with roughly equal sizes. We average pool the local features within the same super-pixel and retrieve s local features, {v_i}_{i=1...s}, one per super-pixel. In our experiments we used s = 100 super-pixels per image.
2.2 Recursive context propagation network
rCPN consists of four neural networks: F_sem maps local features to the semantic space in which the local information is propagated to other segments; F_com recursively aggregates local information from smaller segments to larger segments through a parse tree hierarchy to capture a holistic description of the image; F_dec recursively disseminates the holistic description to smaller segments using the same parse tree; and F_lab classifies the super-pixels utilizing the contextually enhanced features.
Parse tree synthesis: Both for training and inference, the binary parse trees that are used for propagating information through the network are synthesized at random. We used a simple agglomerative algorithm to synthesize the trees by combining sub-trees (starting from single nodes) according to the neighborhood information. To reduce the complexity and avoid degenerate solutions, the synthesis algorithm favors roughly balanced parse trees by greedily selecting sub-trees with smaller heights at random. Note that we use parse trees only as a tool to propagate the contextual information throughout the image. Therefore, unlike [9, 7], we are not limited to parse trees that represent an accurate hierarchical segmentation of the image.
Semantic mapping network is a feed-forward neural network which maps the local features to the d_sem dimensional semantic vector space:
x_i = F_sem(v_i; θ_sem), (1)
where θ_sem is the model parameter.
Combiner network is a recursive neural network which recursively maps the semantic features of two child nodes (super-pixels) in the parse tree to obtain the semantic feature of the parent node (the combination of the two child nodes):
x_{i,j} = F_com([x_i, x_j]; θ_com). (2)
The aim of the semantic features is to capture a joint representation of the local features and the context, and being able to propagate this information through a parse tree hierarchy to other super-pixels. Combiner network is a recursive neural network which recursively maps the semantic features of two child nodes (super-pixels) in the parse tree to obtain the semantic feature of the parent node (combination of the two child nodes) xi,j = Fcom([xi, xj]; θcom). (2) 4 Intuitively, combiner network attempts to aggregate the semantic content of the children nodes such that the parent node becomes representative of its children. The information is recursively aggregated bottom-up from super-pixels to the root node through the parse tree. The semantic features of the root node correspond to the holistic description of the entire image. Decombiner network is a recursive neural network which recursively disseminates the context information from the parent nodes to the children nodes throughout the parse tree hierarchy. This network maps the semantic features of the child node and its parent to the contextually enhanced feature of the child node ˜xi = Fdec([˜xi,j, xi]; θdec). (3) Since we start from the root feature of the entire image and apply the decombiner network top-down recursively until we reach the super-pixel features, every super-pixel feature contains the contextual information aggregated from the entire image. Therefore, it is influenced by every other super-pixel in the image. Labeler network is the final feed forward network which maps the contextually enhanced semantic features (˜xi) of each super-pixel to one of the semantic category labels yj = Flab(˜xi; θlab). (4) Contextually enhanced features contain both local and global context information, thereby leading to better classification. Side information: It is possible to input information to the recursive networks not only at the leaf nodes but also at any level of the parse tree. 
The side information can encode static knowledge about the parse tree nodes and is not recursed through the tree. In our implementation we used the average x and y locations of the nodes and their sizes as the side information.
3 Learning
The proposed labeling architecture is a feed-forward neural network that can be trained end-to-end. However, the recursion makes the neural network too deep for efficient joint training. Therefore, we first learn the CNN parameters (θ_CNN) using the raw images and the ground truth per-pixel labels. The trained CNN model is then used to extract super-pixel features, followed by training of rCPN (θ_rCPN = [θ_sem, θ_com, θ_dec, θ_lab]) to predict the ground truth super-pixel labels.
Feature extractor CNN is trained on a GPU using a publicly available implementation, Caffe [10]. In order to avoid over-fitting we used data augmentation and dropout. All the training images were flipped horizontally, doubling the number of training images. We used dropout in the last layer with a dropout ratio equal to 0.5. Standard back-propagation for the CNN is used with a stochastic gradient descent update scheme on mini-batches of 6 images, with weight decay (λ = 5 × 10^−5) and momentum (µ = 0.9). Training typically took 6-8 hours on a GPU, as compared to the 3-5 days of CPU training reported in [6]. We found that simply using RGB images with ReLU units and dropout gave slightly better pixel-wise accuracy than [6].
rCPN parameters are trained using back-propagation through structure [11], which back-propagates the error through the parse tree, from F_lab to F_sem. The basic idea is to split the error message at each node and propagate it to the child nodes. Limited memory BFGS [12] with line search is used for parameter updates using a publicly available implementation1.
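The random, roughly balanced parse-tree synthesis described in Section 2.2 can be sketched as a simple agglomerative merge of neighboring sub-trees. The code below is an illustrative reconstruction, not the authors' implementation: the exact tie-breaking and randomization in the paper are unspecified, and `synthesize_parse_tree` and its nested-tuple tree encoding are our own.

```python
import random

def synthesize_parse_tree(neighbors, seed=0):
    """Randomly merge neighboring sub-trees into a binary parse tree.

    neighbors: dict mapping super-pixel id -> iterable of adjacent ids
               (assumed connected).
    Returns a nested tuple such as ((0, 1), (2, 3)).  Merging always starts
    from a sub-tree of minimal height, which keeps trees roughly balanced.
    """
    rng = random.Random(seed)
    trees = {i: (i, 0) for i in neighbors}            # id -> (subtree, height)
    adj = {i: set(n) for i, n in neighbors.items()}   # adjacency between sub-trees
    while len(trees) > 1:
        h_min = min(h for _, h in trees.values())
        cands = [i for i, (_, h) in sorted(trees.items()) if h == h_min and adj[i]]
        if not cands:                                 # minimal-height trees isolated
            cands = [i for i in sorted(trees) if adj[i]]
        a = rng.choice(cands)                         # small sub-tree, chosen at random
        b = rng.choice(sorted(adj[a]))                # random adjacent sub-tree
        (ta, ha), (tb, hb) = trees.pop(a), trees.pop(b)
        trees[a] = ((ta, tb), max(ha, hb) + 1)        # merged sub-tree keeps id a
        adj[a] = (adj.pop(a) | adj.pop(b)) - {a, b}
        for i in adj:                                 # redirect edges from b to a
            if i != a and b in adj[i]:
                adj[i].discard(b)
                adj[i].add(a)
    (tree, _), = trees.values()
    return tree
```

Since the trees are only a routing structure for context propagation, any connected adjacency graph yields a usable tree; drawing several trees per image (as done during training and at test time) simply gives different propagation paths over the same super-pixels.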
From each super-pixel we obtained 5 different features by average pooling a random subset of pixels within the super-pixel (as opposed to average pooling all the pixels), and used a different random parse tree for each set of random features; thus we increased our training data by a factor of 5. It typically took 600 to 1000 iterations for complete training.
4 Experiments
We extensively tested the proposed model on two widely used datasets for semantic scene labeling: Stanford background [13] and SIFT Flow [14]. The Stanford background dataset contains 715 color images of outdoor scenes; it has 8 classes and the images are approximately 240 × 320 pixels. We used the 572 train and 143 test image split provided by [7] for reporting the results. SIFT Flow contains 2688 color images of size 256 × 256 with 33 semantic classes. We experimented with the train/test (2488/200) split provided by the authors of [15]. We used three evaluation metrics: Per pixel accuracy (PPA), the ratio of the correct pixels to the total pixels in the test images; Mean class accuracy (MCA), the mean of the category-wise pixel accuracies; and Time per image (Time), the time required to label an input image starting from the raw image input, which we report on both GPU and CPU. The local feature extraction through Multi-CNN [6] encodes contextual information due to its large field of view (FOV); the FOV for the 1, 1/2 and 1/4 scaled input images is 47 × 47, 94 × 94 and 188 × 188 pixels, respectively. Therefore, we designed the experiments under single- and multi-scale settings to assess rCPN's contribution. Multi-CNN + rCPN refers to the case where the feature maps from all three scales (1, 1/2 and 1/4), a 3 × 256 = 768 dimensional local feature for each pixel, are used. Single-CNN + rCPN refers to the case where only the 256 feature maps corresponding to the original resolution image are used.
1 http://www.di.ens.fr/~mschmidt/Software/minFunc.html
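The two accuracy metrics above have a direct implementation; the sketch below (our own helper, not from the paper) computes PPA and MCA from flattened ground-truth and predicted label maps.

```python
import numpy as np

def pixel_metrics(y_true, y_pred, num_classes):
    """Per pixel accuracy (PPA) and mean class accuracy (MCA).

    y_true, y_pred: integer label arrays of the same shape (flattened here)."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    ppa = float(np.mean(y_true == y_pred))
    per_class = []
    for c in range(num_classes):
        mask = y_true == c
        if mask.any():                      # skip classes absent from the test set
            per_class.append(np.mean(y_pred[mask] == c))
    return ppa, float(np.mean(per_class))
```

Note that MCA averages the per-class recalls with equal weight, which is why it is far more sensitive than PPA to rare, under-represented classes.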
Evidently, the amount of contextual information in the local features of Single-CNN is significantly less than that of Multi-CNN because of the smaller FOV. All the individual modules in rCPN, F_sem, F_com, F_dec and F_lab, are single-layer neural networks with ReLU non-linearity, and d_sem = 60 for all the experiments. We used 20 randomly generated parse trees for each image and used voting for the final super-pixel labels. We did not optimize these hyper-parameters and believe that parameter tuning can further increase the performance. The baseline is a two-layer neural network classifier with 60 hidden neurons applied to the Single-CNN or Multi-CNN features of the super-pixels, referred to as Single/Multi-CNN + Plain NN.
4.1 SIFT Flow dataset
We used 100 super-pixels per image obtained by the method of [8]. The results on the SIFT Flow dataset are shown in Table 1. From the comparison it is clear that we outperform all the previous methods on pixel accuracy while being an order of magnitude faster. Farabet et al. [6] improved the mean class accuracy by training a model based on balanced class frequencies. Since some of the classes in the SIFT Flow dataset are under-represented, the class accuracies for them are very low. Therefore, following [6], we also trained a balanced rCPN model that puts more weight on the errors for rare classes as compared to the dominant ones, referred to as Multi-CNN + rCPN Balanced. The smoothed inverse frequency of the pixels of each category is used as the weights. Balanced training helped improve our mean class accuracy from 33.6% to 48.0%, which is still slightly worse than [6] (48.0% vs 50.8%), but our pixel accuracy is higher (75.5% vs 72.3%). Multi-CNN + rCPN performed better than Single-CNN + rCPN, and both performed significantly better than the Plain NN approaches, because the latter do not utilize global contextual information.
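The class-balanced training just described reweights per-class errors by a smoothed inverse pixel frequency. The paper does not specify the smoothing function, so the square root used below is only an illustrative choice, and `balanced_class_weights` is a hypothetical helper rather than the authors' code.

```python
import numpy as np

def balanced_class_weights(labels, num_classes, smooth=np.sqrt):
    """Per-class error weights from smoothed inverse pixel frequency.

    labels: flat array of ground-truth pixel labels for the training set.
    smooth: smoothing applied to frequencies before inversion; sqrt here is
            an assumption, as the paper's exact smoothing is unspecified."""
    counts = np.bincount(np.asarray(labels).ravel(), minlength=num_classes)
    freq = counts / counts.sum()
    inv = 1.0 / smooth(np.maximum(freq, 1e-12))   # rare classes get larger weights
    return inv / inv.sum()                        # normalize weights to sum to one
```

Weighting the training loss this way trades some PPA (dominated by frequent classes) for MCA, which matches the reported shift from 79.6%/33.6% to 75.5%/48.0%.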
We also observed that the relative improvement over Plain NN was larger with Single-CNN features, which encode less context information than Multi-CNN features.
4.2 Stanford background dataset
We used the publicly available super-pixels provided by [7] with our CNN-based local features to obtain super-pixel features. A comparison of our results with previous approaches on the Stanford background dataset is shown in Table 2. We outperform previous approaches on all the performance metrics. Interestingly, we observe that Single-CNN + rCPN performs better than Multi-CNN + rCPN on pixel accuracy. We believe that this is due to over-fitting on the high-dimensional Multi-CNN features given the relatively small training set of only 572 images. Once again the improvement due to rCPN over Plain NN is more prominent in the case of Single-CNN features.
Model analysis: In this section, we analyze the performance of individual components of the proposed model. First, we use rCPN with hand-designed features to evaluate the performance of the context model alone, beyond the local features learned using the CNN. We utilize the visual features and super-pixels used in the semantic mapping and CRF labeling frameworks [7, 13], and trained our rCPN module. The results are presented in Table 3. We see that the rCPN module significantly improves upon the existing context models, namely the CRF model used in [13] and the semantic space proposed in [7]. In addition, CNN-based visual features improve over the hand-designed features. Next, we analyze the performance of the combiner and decombiner networks separately.
Table 1: SIFT Flow results
Method | PPA | MCA | Time (s) CPU/GPU
Tighe, [15] | 77.0 | 30.1 | 8.4 / NA
Liu, [14] | 76.7 | NA | 31 / NA
Singh, [16] | 79.2 | 33.8 | 20 / NA
Eigen, [17] | 77.1 | 32.5 | 16.6 / NA
Farabet, [6] | 78.5 | 29.6 | NA / NA
(Balanced), [6] | 72.3 | 50.8 | NA / NA
Tighe, [18] | 78.6 | 39.2 | ≥8.4 / NA
Pinheiro, [19] | 77.7 | 29.8 | NA / NA
Single-CNN + Plain NN | 72.8 | 25.5 | 5.1 / 0.5
Multi-CNN + Plain NN | 76.3 | 32.1 | 13.1 / 1.4
Single-CNN + rCPN | 77.2 | 25.5 | 5.1 / 0.5
Multi-CNN + rCPN | 79.6 | 33.6 | 13.1 / 1.4
Multi-CNN + rCPN Balanced | 75.5 | 48.0 | 13.1 / 1.4
Multi-CNN + rCPN Fast | 79.5 | 33.4 | 1.1 / 0.37

Table 2: Stanford background results
Method | PPA | MCA | Time (s) CPU/GPU
Gould, [13] | 76.4 | NA | 30 to 600 / NA
Munoz, [20] | 76.9 | NA | 12 / NA
Tighe, [15] | 77.5 | NA | 4 / NA
Kumar, [21] | 79.4 | NA | ≤600 / NA
Socher, [7] | 78.1 | NA | NA / NA
Lempitsky, [9] | 81.9 | 72.4 | ≥60 / NA
Singh, [16] | 74.1 | 62.2 | 20 / NA
Farabet, [6] | 81.4 | 76.0 | 60.5 / NA
Eigen, [17] | 75.3 | 66.5 | 16.6 / NA
Pinheiro, [19] | 80.2 | 69.9 | 10.6 / NA
Single-CNN + Plain NN | 80.1 | 69.7 | 5.1 / 0.5
Multi-CNN + Plain NN | 80.9 | 74.4 | 13.1 / 1.4
Single-CNN + rCPN | 81.9 | 73.6 | 5.1 / 0.5
Multi-CNN + rCPN | 81.0 | 78.8 | 13.1 / 1.4
Multi-CNN + rCPN Fast | 80.9 | 78.8 | 1.1 / 0.37

Table 3: Stanford background, hand-designed local features
Method | 2-layer NN [7] | CRF [13] | Semantic space [7] | proposed rCPN
PPA | 76.1 | 76.4 | 78.1 | 81.4

To evaluate the combiner network in isolation, we first obtain the semantic mapping (x_i) of each super-pixel's visual feature using rCPN's F_sem and append to it the root feature of the entire image to obtain x^com_i = [x_i, x_root]. Then we train a separate Softmax classifier on x^com_i. This resulted in better performance for both Single-scale (PPA: 80.4, MCA: 71.5) and Multi-scale (PPA: 80.8, MCA: 79.1) CNN feature settings over (Single/Multi)-CNN + Plain NN. As previously shown in Table 2, the decombiner network further improves this model.
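The full bottom-up/top-down pass analyzed above, i.e. Eqs. (1)-(4), can be sketched in a few lines. This is a minimal reconstruction with our own naming and randomly initialized single-layer networks, not the trained model; the labeler here returns raw class scores, with the Softmax left out.

```python
import numpy as np

def relu_layer(W, b):
    """Single-layer network with ReLU non-linearity, as each rCPN module uses."""
    return lambda x: np.maximum(0.0, W @ x + b)

def rcpn_forward(v, tree, F_sem, F_com, F_dec, F_lab):
    """One bottom-up/top-down rCPN pass over a nested-tuple parse tree.

    v: dict super-pixel id -> local feature vector; tree: e.g. ((0, 1), 2).
    Returns a dict id -> class scores from the labeler network."""
    x, labels = {}, {}

    def up(node):                                   # Eqs. (1)-(2): aggregate bottom-up
        x[node] = (F_com(np.concatenate([up(node[0]), up(node[1])]))
                   if isinstance(node, tuple) else F_sem(v[node]))
        return x[node]

    def down(node, enhanced):                       # Eq. (3): disseminate top-down
        if isinstance(node, tuple):
            for child in node:
                down(child, F_dec(np.concatenate([enhanced, x[child]])))
        else:
            labels[node] = F_lab(enhanced)          # Eq. (4): label the leaf

    down(tree, up(tree))                            # root context = holistic feature
    return labels
```

Because the same four parameter sets are reused at every tree node, the model's capacity is independent of the number of super-pixels, which is what lets the architecture scale to arbitrary image sizes.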
Computation speed: Our fast method (Section 2.1) takes only 0.37 seconds (0.3 for super-pixel segmentation, 0.06 for feature extraction and 0.01 for rCPN and labeling) to label a 256 × 256 image starting from the raw RGB image on a GTX Titan GPU, and 1.1 seconds on an Intel Core i7 CPU. In both experiments the performance loss from using the fast method is negligible. Interestingly, the time bottleneck of our approach on a GPU is the super-pixel segmentation. Several typical labeling results on the Stanford background dataset using the proposed semantic scene labeling algorithm are shown in Figure 3.
5 Related Work
Scene labeling has two broad categories of approaches: non-parametric and model-based. Recently, many non-parametric approaches for natural scene parsing have been proposed [15, 14, 16, 17, 18]. The underlying theme is to find images similar to the query image in a database of pixel-wise labeled images, followed by super-pixel label transfer from the retrieved images to the query image. Finally, a structured prediction model such as a CRF is used to integrate contextual information to obtain the final labeling. These approaches mainly differ in the retrieval of candidate images or super-pixels, the transfer of labels from the retrieved candidates to the query image, and the form of the structured prediction model used for final labeling. They are based on nearest-neighbor retrieval, which introduces a performance/accuracy tradeoff. The variations present in natural scene images are large, and it is very difficult to cover this entire space of variation with a database of reasonable size, which limits the accuracy of these methods. At the other extreme, a very large database would require long retrieval times, which limits the scalability of these methods.
Figure 3: Typical labeling results on the Stanford background dataset using our method. (Semantic labels: sky, tree, road, grass, water, building, mountain, foreground object.)
Model-based approaches learn the appearance of semantic categories and the relations among them using a parametric model. In [13, 20, 2, 3, 22], CRF models are used to combine unary potentials devised from the visual features extracted from super-pixels with neighborhood constraints. The differences are mainly in terms of the visual features, the unary potentials and the structure of the CRF model. Lempitsky et al. [9] formulated a joint CRF over multiple levels of an image segmentation hierarchy to achieve better results than a flat CRF on the image super-pixels only. Socher et al. [7] learned a mapping from the visual features to a semantic space followed by a two-layer neural network for classification. Better use of contextual information, with the same super-pixels and features, increased the performance on the Stanford background dataset from the CRF-based method of Gould et al. to the semantic mapping of Socher et al. to the proposed work (76.4% → 78.1% → 81.4%). This indicates that neural-network-based models have the potential to learn more complicated interactions than a CRF. Moreover, they have the advantage of being extremely fast, due to their feed-forward nature. Farabet et al. [6] proposed to learn the visual features from image/label training pairs using a multi-scale convolutional neural network. They reported state-of-the-art results on various datasets using gPb, purity-cover and a CRF on top of their learned features. Pinheiro et al. [19] extended this work by feeding the per-pixel labels predicted by a CNN classifier back into the next stage of the same CNN classifier. However, their propagation structure is not adaptive to the image content, and propagating only label information did not improve much over the prior work. Similar to these methods, we also make use of the Multi-CNN module to extract local features in our pipeline.
However, our novel context propagation network shows that propagating semantic representations bottom-up and top-down using a parse tree hierarchy is a more effective way to aggregate global context information. Please see Tables 1 and 2 for a detailed comparison of our method with the methods discussed above. CRFs model the joint distribution of the output variables given the observations and can include higher-order potentials in addition to the unary potentials. Higher-order potentials allow these models to represent the dependencies between the output variables, which is important for structured prediction tasks. On the downside, with a few exceptions such as non-loopy models, inference in these models is NP-hard, can only be solved approximately, and is time-consuming. Moreover, parameter learning procedures that are tractable usually limit the potential functions to simple forms such as linear models. In contrast, in our model we can efficiently learn complex relations between a single output variable and all the observations from an image, allowing a large context to be considered effectively. Additionally, the inference procedure is a simple feed-forward pass that can be performed very fast. However, the form of our function is still a unary term, and our model cannot represent higher-order output dependencies. Our model can also be used to obtain the unary potential for a structured inference model.
6 Conclusion
We introduced a novel deep neural network architecture, a combination of a convolutional neural network and a recursive neural network, for semantic scene labeling. The key contribution is the recursive context propagation network, which effectively propagates contextual information from one location of the image to other locations in a feed-forward manner. This structure led to state-of-the-art semantic scene labeling results on the Stanford background and SIFT Flow datasets with very fast processing speed.
Next we plan to scale-up our model for recently introduced large scale learning task [23]. 8 References [1] A. Torralba, K.P. Murphy, W.T. Freeman, and M.A. Rubin. Context-based vision system for place and object recognition. IEEE CVPR, 2003. [2] Roozbeh Mottaghi, Sanja Fidler, Jian Yao, Raquel Urtasun, and Devi Parikh. Analyzing semantic segmentation using hybrid human-machine crfs. IEEE CVPR, 2013. [3] Roozbeh Mottaghi, Xianjie Chen, Xiaobai Liu, Nam-Gyu Cho, Seong-Whan Lee, Sanja Fidler, Raquel Urtasun, and Alan Yuille. The role of context for object detection and semantic segmentation in the wild. IEEE CVPR, 2014. [4] John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. ICML, pages 282–289, 2001. [5] Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, Yasemin Altun, and Yoram Singer. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6(2):1453, 2006. [6] Clement Farabet, Camille Couprie, Laurent Najman, and Yann LeCun. Learning hierarchical features for scene labeling. IEEE TPAMI, August 2013. [7] Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. Parsing natural scenes and natural language with recursive neural networks. ICML, 2011. [8] Ming-Yu Liu, Oncel Tuzel, Srikumar Ramalingam, and Rama Chellappa. Entropy rate superpixel segmentation. IEEE CVPR, 2011. [9] V. Lempitsky, A. Vedaldi, and A. Zisserman. A pylon model for semantic segmentation. NIPS, 2011. [10] Yangqing Jia. Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org/, 2013. [11] Christoph Goller and Andreas Kchler. Learning task-dependent distributed representations by backpropagation through structure. Int Conf. on Neural Network, 1995. [12] Dong C. Liu, Jorge Nocedal, and Dong C. On the limited memory bfgs method for large scale optimization. 
Mathematical Programming, 45:503–528, 1989. [13] Stephen Gould, Richard Fulton, and Daphne Koller. Decomposing a scene into geometric and semantically consistent regions. IEEE ICCV, 2009. [14] Ce Liu, Jenny Yuen, and Antonio Torralba. Nonparametric scene parsing via label transfer. IEEE TPAMI, 33(12), Dec 2011. [15] Joseph Tighe and Svetlana Lazebnik. Superparsing: Scalable nonparametric image parsing with superpixels. IJCV, 101:329–349, 2013. [16] Gautam Singh and Jana Kosecka. Nonparametric scene parsing with adaptive feature relevance and semantic context. IEEE CVPR, 2013. [17] R. Fergus and D. Eigen. Nonparametric image parsing using adaptive neighbor sets. IEEE CVPR, 2012. [18] Joseph Tighe and Svetlana Lazebnik. Finding things: Image parsing with regions and per-exemplar detectors. IEEE CVPR, 2013. [19] Pedro H. O. Pinheiro and Ronan Collobert. Recurrent convolutional neural networks for scene parsing. ICML, 2014. [20] Daniel Munoz, J. Andrew Bagnell, and Martial Hebert. Stacked hierarchical labeling. ECCV, 2010. [21] M. Pawan Kumar and Daphne Koller. Efficiently selecting regions for scene understanding. IEEE CVPR, 2010. [22] Gungor Polatkan and Oncel Tuzel. Compressed inference for probabilistic sequential models. UAI, pages 609–618, 2011. [23] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. ECCV, 2014.
| 2014 | 184 |
5,274 |
Spectral Methods Meet EM: A Provably Optimal Algorithm for Crowdsourcing Yuchen Zhang† Xi Chen♯ Dengyong Zhou∗ Michael I. Jordan† †University of California, Berkeley, Berkeley, CA 94720 {yuczhang,jordan}@berkeley.edu ♯New York University, New York, NY 10012 xichen@nyu.edu ∗Microsoft Research, 1 Microsoft Way, Redmond, WA 98052 dengyong.zhou@microsoft.com Abstract The Dawid-Skene estimator has been widely used for inferring the true labels from the noisy labels provided by non-expert crowdsourcing workers. However, since the estimator maximizes a non-convex log-likelihood function, it is hard to theoretically justify its performance. In this paper, we propose a two-stage efficient algorithm for multi-class crowd labeling problems. The first stage uses the spectral method to obtain an initial estimate of parameters. Then the second stage refines the estimation by optimizing the objective function of the Dawid-Skene estimator via the EM algorithm. We show that our algorithm achieves the optimal convergence rate up to a logarithmic factor. We conduct extensive experiments on synthetic and real datasets. Experimental results demonstrate that the proposed algorithm is comparable to the most accurate empirical approach, while outperforming several other recently proposed methods. 1 Introduction With the advent of online crowdsourcing services such as Amazon Mechanical Turk, crowdsourcing has become an appealing way to collect labels for large-scale data. Although this approach has virtues in terms of scalability and immediate availability, labels collected from the crowd can be of low quality since crowdsourcing workers are often non-experts and can be unreliable. As a remedy, most crowdsourcing services resort to labeling redundancy, collecting multiple labels from different workers for each item. Such a strategy raises a fundamental problem in crowdsourcing: how to infer true labels from noisy but redundant worker labels? 
For labeling tasks with k different categories, Dawid and Skene [8] propose a maximum likelihood approach based on the Expectation-Maximization (EM) algorithm. They assume that each worker is associated with a k × k confusion matrix, where the (l, c)-th entry represents the probability that a randomly chosen item in class l is labeled as class c by the worker. The true labels and worker confusion matrices are jointly estimated by maximizing the likelihood of the observed worker labels, where the unobserved true labels are treated as latent variables. Although this EM-based approach has had empirical success [21, 20, 19, 26, 6, 25], there is as yet no theoretical guarantee for its performance. A recent theoretical study [10] shows that the global optimal solutions of the Dawid-Skene estimator can achieve minimax rates of convergence in a simplified scenario, where the labeling task is binary and each worker has a single parameter to represent her labeling accuracy (referred to as a “one-coin model” in what follows). However, since the likelihood function is non-convex, this guarantee is not operational because the EM algorithm may get trapped in a local optimum. Several alternative approaches have been developed that aim to circumvent the theoretical deficiencies of the EM algorithm, still in the context of the one-coin model [14, 15, 11, 7]. Unfortunately, they either fail to achieve the optimal rates or depend on restrictive assumptions which are hard to justify in practice. We propose a computationally efficient and provably optimal algorithm to simultaneously estimate true labels and worker confusion matrices for multi-class labeling problems. Our approach is a two-stage procedure, in which we first compute an initial estimate of worker confusion matrices using the spectral method, and then in the second stage we turn to the EM algorithm. 
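Before detailing the two stages, it helps to have the Dawid-Skene generative model in executable form. The sketch below is purely illustrative (the sizes, class prior, and worker reliabilities are invented for the demo, not taken from the paper); it samples true labels, corrupts them through per-worker confusion probabilities µilc, and scores a majority-vote baseline:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 10, 500, 3                       # workers, items, classes (invented sizes)
w = np.array([0.5, 0.3, 0.2])              # class prior P[y_j = l] = w_l

# mu[i, l, c] = probability that worker i labels a class-l item as class c.
# Here every worker is 80% accurate with uniform errors (an invented setting).
mu = np.tile(0.7 * np.eye(k) + 0.1 * np.ones((k, k)), (m, 1, 1))
assert np.allclose(mu.sum(axis=2), 1.0)    # each row is a distribution

y = rng.choice(k, size=n, p=w)             # latent true labels
Z = np.zeros((m, n, k))                    # one-hot observed labels z_ij
for i in range(m):
    for j in range(n):
        Z[i, j, rng.choice(k, p=mu[i, y[j]])] = 1.0

y_mv = Z.sum(axis=0).argmax(axis=1)        # majority-vote baseline
accuracy = (y_mv == y).mean()
```

With this redundancy (ten 80%-accurate workers), the majority vote is far more accurate than any single worker, which is the effect the paper's estimators exploit and refine.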
Under some mild conditions, we show that this two-stage procedure achieves minimax rates of convergence up to a logarithmic factor, even after only one iteration of EM. In particular, given any δ ∈(0, 1), we provide the bounds on the number of workers and the number of items so that our method can correctly estimate labels for all items with probability at least 1−δ. We also establish a lower bound to demonstrate the optimality of this approach. Further, we provide both upper and lower bounds for estimating the confusion matrix of each worker and show that our algorithm achieves the optimal accuracy. This work not only provides an optimal algorithm for crowdsourcing but sheds light on understanding the general method of moments. Empirical studies show that when the spectral method is used as an initialization for the EM algorithm, it outperforms EM with random initialization [18, 5]. This work provides a concrete way to theoretically justify such observations. It is also known that starting from a root-n consistent estimator obtained by the spectral method, one Newton-Raphson step leads to an asymptotically optimal estimator [17]. However, obtaining a root-n consistent estimator and performing a Newton-Raphson step can be demanding computationally. In contrast, our initialization doesn’t need to be root-n consistent, thus a small portion of data suffices to initialize. Moreover, performing one iteration of EM is computationally more attractive and numerically more robust than a Newton-Raphson step especially for high-dimensional problems. 2 Related Work Many methods have been proposed to address the problem of estimating true labels in crowdsourcing [23, 20, 22, 11, 19, 26, 7, 15, 14, 25]. The methods in [20, 11, 15, 19, 14, 7] are based on the generative model proposed by Dawid and Skene [8]. In particular, Ghosh et al. [11] propose a method based on Singular Value Decomposition (SVD) which addresses binary labeling problems under the one-coin model. 
The analysis in [11] assumes that the labeling matrix is full, that is, each worker labels all items. To relax this assumption, Dalvi et al. [7] propose another SVD-based algorithm which explicitly considers the sparsity of the labeling matrix in both algorithm design and theoretical analysis. Karger et al. propose an iterative algorithm for binary labeling problems under the one-coin model [15] and extend it to multi-class labeling tasks by converting a k-class problem into k −1 binary problems [14]. This line of work assumes that tasks are assigned to workers according to a random regular graph, thus imposing specific constraints on the number of workers and the number of items. In Section 5, we compare our theoretical results with that of existing approaches [11, 7, 15, 14]. The methods in [20, 19, 6] incorporate Bayesian inference into the Dawid-Skene estimator by assuming a prior over confusion matrices. Zhou et al. [26, 25] propose a minimax entropy principle for crowdsourcing which leads to an exponential family model parameterized with worker ability and item difficulty. When all items have zero difficulty, the exponential family model reduces to the generative model suggested by Dawid and Skene [8]. Our method for initializing the EM algorithm in crowdsourcing is inspired by recent work using spectral methods to estimate latent variable models [3, 1, 4, 2, 5, 27, 12, 13]. The basic idea in this line of work is to compute third-order empirical moments from the data and then to estimate parameters by computing a certain orthogonal decomposition of a tensor derived from the moments. Given the special symmetric structure of the moments, the tensor factorization can be computed efficiently using the robust tensor power method [3]. 
A problem with this approach is that the estimation error can have a poor dependence on the condition number of the second-order moment matrix, and thus empirically it sometimes performs worse than EM with multiple random initializations. Our method, by contrast, requires only a rough initialization from the method of moments; we show that the estimation error does not depend on the condition number (see Theorem 2 (b)). 3 Problem Setup Throughout this paper, [a] denotes the integer set {1, 2, . . . , a} and σb(A) denotes the b-th largest singular value of the matrix A. Suppose that there are m workers, n items and k classes. The true label yj of item j ∈[n] is assumed to be sampled from a probability distribution P[yj = l] = wl, where {wl : l ∈[k]} are positive values satisfying Σk l=1 wl = 1. Denote by a vector zij ∈Rk the label that worker i assigns to item j.
Algorithm 1: Estimating confusion matrices
Input: integer k, observed labels zij ∈Rk for i ∈[m] and j ∈[n].
Output: confusion matrix estimates bCi ∈Rk×k for i ∈[m].
(1) Partition the workers into three disjoint and non-empty groups G1, G2 and G3. Compute the group aggregated labels Zgj by Eq. (1).
(2) For (a, b, c) ∈{(2, 3, 1), (3, 1, 2), (1, 2, 3)}, compute the second and third order moments c M2 ∈Rk×k, c M3 ∈Rk×k×k by Eqs. (2a)-(2d), then compute bC⋄ c ∈Rk×k and c W ∈Rk×k by tensor decomposition:
  (a) Compute the whitening matrix bQ ∈Rk×k (such that bQT c M2 bQ = I) using SVD.
  (b) Compute the eigenvalue-eigenvector pairs {(bαh, bvh)}k h=1 of the whitened tensor c M3( bQ, bQ, bQ) using the robust tensor power method [3]. Then compute bwh = bα−2 h and bµ⋄ h = ( bQT )−1(bαh bvh).
  (c) For l = 1, . . . , k, set the l-th column of bC⋄ c to a vector bµ⋄ h whose l-th coordinate has the greatest component, then set the l-th diagonal entry of c W to the corresponding bwh.
(3) Compute bCi by Eq. (3).
When the assigned label is c, we write zij = ec, where ec represents the c-th canonical basis vector in Rk, in which the c-th entry is 1 and all other entries are 0. A worker may not label every item. Let πi indicate the probability that worker i labels a randomly chosen item. If item j is not labeled by worker i, we write zij = 0. Our goal is to estimate the true labels {yj : j ∈[n]} from the observed labels {zij : i ∈[m], j ∈[n]}. In order to obtain an estimator, we need to make assumptions on the process generating the observed labels. Following the work of Dawid and Skene [8], we assume that the probability that worker i labels an item in class l as class c is independent of any particular chosen item, that is, it is a constant over j ∈[n]. We denote this constant probability by µilc. Let µil = [µil1 µil2 · · · µilk]T . The matrix Ci = [µi1 µi2 . . . µik] ∈Rk×k is called the confusion matrix of worker i. Besides estimating the true labels, we also want to estimate the confusion matrix for each worker. 4 Our Algorithm In this section, we present an algorithm to estimate confusion matrices and true labels. Our algorithm consists of two stages. In the first stage, we compute an initial estimate of the confusion matrices via the method of moments. In the second stage, we perform the standard EM algorithm, taking the result of Stage 1 as an initialization. 4.1 Stage 1: Estimating Confusion Matrices We partition the workers into three disjoint and non-empty groups G1, G2 and G3. The outline of this stage is as follows: we use the spectral method to estimate the averaged confusion matrices for the three groups, then utilize this intermediate estimate to obtain the confusion matrix of each individual worker. In particular, for g ∈{1, 2, 3} and j ∈[n], we calculate the averaged labeling within each group by Zgj := (1/|Gg|) Σi∈Gg zij. 
(1) Denoting the aggregated confusion matrix columns by µ⋄ gl := E(Zgj|yj = l) = (1/|Gg|) Σi∈Gg πiµil, our first step is to estimate C⋄ g := [µ⋄ g1, µ⋄ g2, . . . , µ⋄ gk] and to estimate the distribution of true labels W := diag(w1, w2, . . . , wk). The following proposition shows that we can solve for C⋄ g and W from the moments of {Zgj}. Proposition 1 (Anandkumar et al. [3]). Assume that the vectors {µ⋄ g1, µ⋄ g2, . . . , µ⋄ gk} are linearly independent for each g ∈{1, 2, 3}. Let (a, b, c) be a permutation of {1, 2, 3}. Define Z′ aj := E[Zcj ⊗Zbj] (E[Zaj ⊗Zbj])−1 Zaj, Z′ bj := E[Zcj ⊗Zaj] (E[Zbj ⊗Zaj])−1 Zbj, M2 := E[Z′ aj ⊗Z′ bj] and M3 := E[Z′ aj ⊗Z′ bj ⊗Zcj]; then we have M2 = Σk l=1 wl µ⋄ cl ⊗µ⋄ cl and M3 = Σk l=1 wl µ⋄ cl ⊗µ⋄ cl ⊗µ⋄ cl. Since we only have finite samples, the expectations in Proposition 1 have to be approximated by empirical moments. In particular, they are computed by averaging over the indices j = 1, 2, . . . , n. For each permutation (a, b, c) ∈{(2, 3, 1), (3, 1, 2), (1, 2, 3)}, we compute bZ′ aj := (1/n Σn j=1 Zcj ⊗Zbj)(1/n Σn j=1 Zaj ⊗Zbj)−1 Zaj, (2a) bZ′ bj := (1/n Σn j=1 Zcj ⊗Zaj)(1/n Σn j=1 Zbj ⊗Zaj)−1 Zbj, (2b) c M2 := 1/n Σn j=1 bZ′ aj ⊗bZ′ bj, (2c) c M3 := 1/n Σn j=1 bZ′ aj ⊗bZ′ bj ⊗Zcj. (2d) The statement of Proposition 1 suggests that we can recover the columns of C⋄ c and the diagonal entries of W by operating on the moments c M2 and c M3. This is implemented by the tensor factorization method in Algorithm 1. In particular, the tensor factorization algorithm returns a set of vectors {(bµ⋄ h, bwh) : h = 1, . . . , k}, where each (bµ⋄ h, bwh) estimates a particular column of C⋄ c (for some µ⋄ cl) and a particular diagonal entry of W (for some wl). It is important to note that the tensor factorization algorithm does not provide a one-to-one correspondence between the recovered columns and the true columns of C⋄ c . Thus, bµ⋄ 1, . . . , bµ⋄ k represent an arbitrary permutation of the true columns. 
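Equation (1) and the empirical moments (2a)-(2d) reduce to a few lines of matrix algebra. The following sketch is an illustration under assumptions of my own (all groups non-empty, the k × k cross-moment matrices invertible); it is not the authors' implementation:

```python
import numpy as np

def group_averages(Z, groups):
    """Eq. (1): Zg_j = (1/|G_g|) * sum_{i in G_g} z_ij, for each group g.
    Z: (m, n, k) one-hot labels; groups: list of worker-index lists."""
    return [Z[g].mean(axis=0) for g in groups]           # each entry is (n, k)

def empirical_moments(Za, Zb, Zc):
    """Empirical moments (2a)-(2d) for one permutation (a, b, c).
    Za, Zb, Zc: (n, k) group-averaged label matrices, one row per item."""
    n = Za.shape[0]
    S = lambda X, Y: X.T @ Y / n                         # (1/n) sum_j X_j ⊗ Y_j
    Za2 = Za @ np.linalg.inv(S(Za, Zb)).T @ S(Zc, Zb).T  # rows are hat-Z'_aj (2a)
    Zb2 = Zb @ np.linalg.inv(S(Zb, Za)).T @ S(Zc, Za).T  # rows are hat-Z'_bj (2b)
    M2 = S(Za2, Zb2)                                     # (2c), a k x k matrix
    M3 = np.einsum('ja,jb,jc->abc', Za2, Zb2, Zc) / n    # (2d), a k x k x k tensor
    return M2, M3
```

A quick correctness check: if the three groups consist of identical, noise-free workers, the group averages equal the one-hot true labels, c M2 reduces to the diagonal matrix of empirical class frequencies, and c M3 to the corresponding diagonal tensor, exactly as Proposition 1 predicts.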
To discover the index correspondence, we take each bµ⋄ h and examine its greatest component. We assume that within each group, the probability of assigning a correct label is always greater than the probability of assigning any specific incorrect label. This assumption will be made precise in the next section. As a consequence, if bµ⋄ h corresponds to the l-th column of C⋄ c , then its l-th coordinate is expected to be greater than the other coordinates. Thus, we set the l-th column of bC⋄ c to some vector bµ⋄ h whose l-th coordinate has the greatest component (if there are multiple such vectors, we randomly select one of them; if there is no such vector, we randomly select a bµ⋄ h). Then, we set the l-th diagonal entry of c W to the scalar bwh associated with bµ⋄ h. Note that by iterating over (a, b, c) ∈{(2, 3, 1), (3, 1, 2), (1, 2, 3)}, we obtain bC⋄ c for c = 1, 2, 3 respectively. There will be three copies of c W estimating the same matrix W; we average them for the best accuracy. In the second step, we estimate each individual confusion matrix Ci. The following proposition shows that we can recover Ci from the moments of {zij}. See [24] for the proof. Proposition 2. For any g ∈{1, 2, 3} and any i ∈Gg, let a ∈{1, 2, 3}\{g} be one of the remaining group indices. Then πiCiW(C⋄ a)T = E[zijZT aj]. Proposition 2 suggests a plug-in estimator for Ci. We compute bCi using the empirical approximation of E[zijZT aj] and the matrices bC⋄ a, bC⋄ b , c W obtained in the first step. Concretely, we calculate bCi := normalize( (1/n Σn j=1 zijZT aj)(c W( bC⋄ a)T )−1 ), (3) where the normalization operator rescales the matrix columns, making sure that each column sums to one. The overall procedure for Stage 1 is summarized in Algorithm 1. 4.2 Stage 2: EM algorithm The second stage is devoted to refining the initial estimate provided by Stage 1. 
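The index-correspondence rule of step (c) in Algorithm 1, placing each recovered vector bµ⋄ h at the column index of its largest coordinate, can be sketched as follows (illustrative only; the random tie-breaking and fallback described in the text are simplified to a plain argmax):

```python
import numpy as np

def match_columns(mu_perm, w_perm):
    """mu_perm: (k, k) recovered columns in arbitrary order; w_perm: (k,) weights.
    Returns (C_hat, w_hat) with each column placed at the index of its largest entry."""
    k = mu_perm.shape[1]
    C_hat = np.zeros((k, k))
    w_hat = np.zeros(k)
    for h in range(k):
        l = int(np.argmax(mu_perm[:, h]))   # dominant coordinate of recovered column h
        C_hat[:, l] = mu_perm[:, h]
        w_hat[l] = w_perm[h]
    return C_hat, w_hat
```

Under the diagonal-dominance assumption stated above, this undoes any column permutation returned by the tensor factorization.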
The joint likelihood of the true labels yj and observed labels zij, as a function of the confusion matrices µi, can be written as L(µ; y, z) := Πn j=1 Πm i=1 Πk c=1 (µiyjc)I(zij=ec). By assuming a uniform prior over y, we maximize the marginal log-likelihood function ℓ(µ) := log(Σy∈[k]n L(µ; y, z)). We refine the initial estimate of Stage 1 by maximizing this objective function, which is implemented by the Expectation-Maximization (EM) algorithm. The EM algorithm takes the values {bµilc} provided as output by Stage 1 as initialization, then executes the following E-step and M-step for at least one round. E-step Calculate the expected value of the log-likelihood function, with respect to the conditional distribution of y given z under the current estimate of µ: Q(µ) := Ey|z,bµ [log(L(µ; y, z))] = Σn j=1 Σk l=1 bqjl log( Πm i=1 Πk c=1 (µilc)I(zij=ec) ), where bqjl ← exp( Σm i=1 Σk c=1 I(zij = ec) log(bµilc) ) / Σk l′=1 exp( Σm i=1 Σk c=1 I(zij = ec) log(bµil′c) ) for j ∈[n], l ∈[k]. (4) M-step Find the estimate bµ that maximizes the function Q(µ): bµilc ← Σn j=1 bqjlI(zij = ec) / Σk c′=1 Σn j=1 bqjlI(zij = ec′) for i ∈[m], l ∈[k], c ∈[k]. (5) In practice, we alternately execute the updates (4) and (5), for one iteration or until convergence. Each update increases the objective function ℓ(µ). Since ℓ(µ) is not concave, the EM update does not guarantee convergence to the global maximum. It may converge to distinct local stationary points for different initializations. Nevertheless, as we prove in the next section, it is guaranteed that the EM algorithm will output statistically optimal estimates of the true labels and worker confusion matrices if it is initialized by Algorithm 1. 5 Convergence Analysis To state our main theoretical results, we first need to introduce some notation and assumptions. Let wmin := min{wl}k l=1 and πmin := min{πi}m i=1 be the smallest portion of true labels and the most extreme sparsity level of workers. 
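The E-step (4) and M-step (5) above vectorize naturally over workers, items and classes. The sketch below is an illustration (the array layout and the small clipping constant are my choices, not the paper's):

```python
import numpy as np

def em_step(Z, mu, eps=1e-12):
    """One round of the EM updates (Eqs. (4)-(5)).
    Z:  (m, n, k) one-hot worker labels; an all-zero row means worker i skipped item j.
    mu: (m, k, k) current confusion estimates, mu[i, l, c] = mu_ilc."""
    # E-step (Eq. 4): q_jl ∝ exp( sum_{i,c} I(z_ij = e_c) * log(mu_ilc) )
    S = np.einsum('ijc,ilc->jl', Z, np.log(mu + eps))
    S -= S.max(axis=1, keepdims=True)          # stabilize before exponentiating
    q = np.exp(S)
    q /= q.sum(axis=1, keepdims=True)
    # M-step (Eq. 5): mu_ilc ∝ sum_j q_jl * I(z_ij = e_c), normalized over c
    counts = np.einsum('jl,ijc->ilc', q, Z)
    mu_new = counts / (counts.sum(axis=2, keepdims=True) + eps)
    return mu_new, q
```

Items a worker skipped (zij = 0) contribute nothing to either step, since their one-hot rows are all zero, which matches the indicator sums in (4) and (5).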
Our first assumption is that both wmin and πmin are strictly positive, that is, every class and every worker contributes to the dataset. Our second assumption is that the confusion matrices for each of the three groups, namely C⋄ 1, C⋄ 2 and C⋄ 3, are nonsingular. As a consequence, if we define matrices Sab and tensors Tabc for any a, b, c ∈{1, 2, 3} as Sab := Σk l=1 wl µ⋄ al ⊗µ⋄ bl = C⋄ aW(C⋄ b )T and Tabc := Σk l=1 wl µ⋄ al ⊗µ⋄ bl ⊗µ⋄ cl, then there will be a positive scalar σL such that σk(Sab) ≥σL > 0. Our third assumption is that within each group, the average probability of assigning a correct label is always higher than the average probability of assigning any incorrect label. To make this statement rigorous, we define a quantity κ := min g∈{1,2,3} min l∈[k] min c∈[k]\{l} {µ⋄ gll −µ⋄ glc} indicating the smallest gap between diagonal entries and non-diagonal entries in the same confusion matrix column. The assumption requires that κ be strictly positive. Note that this assumption is group-based, and thus does not assume the accuracy of any individual worker. Finally, we introduce a quantity that measures the average ability of workers in identifying distinct labels. For two discrete distributions P and Q, let DKL(P, Q) := Σi P(i) log(P(i)/Q(i)) represent the KL-divergence between P and Q. Since each column of the confusion matrix represents a discrete distribution, we can define the following quantity: D := min l̸=l′ (1/m) Σm i=1 πiDKL(µil, µil′). (6) The quantity D lower bounds the averaged KL-divergence between two columns. If D is strictly positive, it means that every pair of labels can be distinguished by at least one subset of workers. As the last assumption, we assume that D is strictly positive. The following two theorems characterize the performance of our algorithm. We split the convergence analysis into two parts. 
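The quantity D of Eq. (6) is simple to evaluate from the confusion matrices and sparsity levels; a sketch (illustrative; the small eps guard against zero entries is mine):

```python
import numpy as np

def worker_divergence(mu, pi, eps=1e-12):
    """Eq. (6): D = min_{l != l'} (1/m) * sum_i pi_i * KL(mu_il, mu_il').
    mu: (m, k, k) confusion entries mu[i, l, c]; pi: (m,) labeling probabilities."""
    m, k, _ = mu.shape
    best = np.inf
    for l in range(k):
        for lp in range(k):
            if l == lp:
                continue
            # per-worker KL divergence between columns l and l'
            kl = np.sum(mu[:, l] * np.log((mu[:, l] + eps) / (mu[:, lp] + eps)), axis=1)
            best = min(best, float(np.mean(pi * kl)))
    return best
```

For example, a single worker with columns (0.9, 0.1) and (0.1, 0.9) gives D = 0.8 log 9; adding a spammer with identical columns halves the average, illustrating how D aggregates worker ability.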
Theorem 1 characterizes the performance of Algorithm 1, providing sufficient conditions for achieving an arbitrarily accurate initialization. We provide the proof of Theorem 1 in the long version of this paper [24]. Theorem 1. For any scalar δ > 0 and any scalar ϵ satisfying ϵ ≤min n 36κk πminwminσL , 2 o , if the number of items n satisfies n = Ω k5 log((k + m)/δ) ϵ2π2 minw2 minσ13 L , then the confusion matrices returned by Algorithm 1 are bounded as ∥bCi −Ci∥∞≤ϵ for all i ∈[m], with probability at least 1 −δ. Here, ∥· ∥∞denotes the element-wise ℓ∞-norm of a matrix. Theorem 2 characterizes the error rate in Stage 2. It states that when a sufficiently accurate initialization is taken, the updates (4) and (5) refine the estimates bµ and by to the optimal accuracy. See the long version of this paper [24] for the proof. Theorem 2. Assume that there is a positive scalar ρ such that µilc ≥ρ for all (i, l, c) ∈[m] × [k]2. For any scalar δ > 0, if confusion matrices bCi are initialized in a manner such that ∥bCi −Ci∥∞≤α := min ρ 2, ρD 16 for all i ∈[m], (7) and the number of workers m and the number of items n satisfy m = Ω log(1/ρ) log(kn/δ) + log(mn) D and n = Ω log(mk/δ) πminwminα2 , then, for bµ and bq obtained by iterating (4) and (5) (for at least one round), with probability at least 1 −δ, (a) Letting byj = arg maxl∈[k] bqjl, we have that byj = yj holds for all j ∈[n]. (b) ∥bµil −µil∥2 2 ≤48 log(2mk/δ) πiwln holds for all (i, l) ∈[m] × [k]. In Theorem 2, the assumption that all confusion matrix entries are lower bounded by ρ > 0 is somewhat restrictive. For datasets violating this assumption, we enforce positive confusion matrix entries by adding random noise: Given any observed label zij, we replace it by a random label in {1, ..., k} with probability kρ. In this modified model, every entry of the confusion matrix is lower bounded by ρ, so that Theorem 2 holds. 
The random noise makes the constant D smaller than its original value, but the change is minor for small ρ.

Table 1: Summary of datasets used in the real data experiment.
Dataset | # classes | # items | # workers | # worker labels
Bird | 2 | 108 | 39 | 4,212
RTE | 2 | 800 | 164 | 8,000
TREC | 2 | 19,033 | 762 | 88,385
Dog | 4 | 807 | 52 | 7,354
Web | 5 | 2,665 | 177 | 15,567

To see the consequence of the convergence analysis, we take the error rate ϵ in Theorem 1 equal to the constant α defined in Theorem 2. Then we combine the statements of the two theorems. This shows that if we choose the number of workers m and the number of items n such that m = eΩ(1/D) and n = eΩ( k5 / (π2 min w2 min σ13 L min{ρ2, (ρD)2}) ); (8) that is, if both m and n are lower bounded by a problem-specific constant and logarithmic terms, then with high probability, the predictor by will be perfectly accurate, and the estimator bµ will be bounded as ∥bµil −µil∥2 2 ≤e O(1/(πiwln)). To show the optimality of this convergence rate, we present the following minimax lower bounds. Again, see [24] for the proof. Theorem 3. There are universal constants c1 > 0 and c2 > 0 such that: (a) For any {µilc}, {πi} and any number of items n, if the number of workers m ≤1/(4D), then inf by sup v∈[k]n E[ Σn j=1 I(byj ̸= yj) | {µilc}, {πi}, y = v ] ≥c1n. (b) For any {wl}, {πi}, any worker-item pair (m, n) and any pair of indices (i, l) ∈[m] × [k], we have inf bµ sup µ∈Rm×k×k E[ ∥bµil −µil∥2 2 | {wl}, {πi} ] ≥c2 min{1, 1/(πiwln)}. In part (a) of Theorem 3, we see that the number of workers should be at least 1/(4D); otherwise any predictor will make many mistakes. This lower bound matches our sufficient condition on the number of workers m (see Eq. (8)). In part (b), we see that the best possible estimate for µil has Ω(1/(πiwln)) mean-squared error. This verifies the optimality of our estimator bµil. It is worth noting that the constraint on the number of items n (see Eq. (8)) might be improvable. 
In real datasets we usually have n ≫m, so that optimality in m is more important than optimality in n. It is worth contrasting our convergence rate with those of existing algorithms. Ghosh et al. [11] and Dalvi et al. [7] proposed consistent estimators for the binary one-coin model. To attain an error rate δ, their algorithms require m and n to scale with 1/δ2, while our algorithm only requires m and n to scale with log(1/δ). Karger et al. [15, 14] proposed algorithms for both binary and multi-class problems. Their algorithm assumes that workers are assigned by a random regular graph. Moreover, their analysis assumes that the number of items goes to infinity, or that the number of workers is many times the number of items. Our algorithm does not require these assumptions. We also compare our algorithm with the majority voting estimator, where the true label is simply estimated by a majority vote among workers. Gao and Zhou [10] showed that if there are many spammers and few experts, the majority voting estimator gives almost a random guess. In contrast, our algorithm only requires mD = eΩ(1) to guarantee good performance. Since mD is the aggregated KL-divergence, a small number of experts is sufficient to ensure that it is large enough. 6 Experiments In this section, we report the results of empirical studies comparing the algorithm we propose in Section 4 (referred to as Opt-D&S) with a variety of existing methods that are also based on the generative model of Dawid and Skene. 
Specifically, we compare to the Dawid & Skene estimator initialized by majority voting (referred to as MV-D&S), the pure majority voting estimator, the multi-class labeling algorithm proposed by Karger et al. [14] (referred to as KOS), the SVD-based algorithm proposed by Ghosh et al. [11] (referred to as Ghosh-SVD) and the “Eigenvalues of Ratio” algorithm proposed by Dalvi et al. [7] (referred to as EigenRatio). The evaluation is made on five real datasets. We compare the crowdsourcing algorithms on three binary tasks and two multi-class tasks. Binary tasks include labeling bird species [22] (Bird dataset), recognizing textual entailment [21] (RTE dataset) and assessing the quality of documents in the TREC 2011 crowdsourcing track [16] (TREC dataset). Multi-class tasks include labeling the breed of dogs from ImageNet [9] (Dog dataset) and judging the relevance of web search results [26] (Web dataset).

[Figure 1: Comparing MV-D&S and Opt-D&S with different thresholding parameter ∆. The label prediction error is plotted after the 1st EM update and after convergence (50th iteration), for thresholds ∆ from 10−6 to 10−1. Panels: (a) RTE, (b) Dog, (c) Web.]

Table 2: Error rate (%) in predicting true labels on real data.
Dataset | Opt-D&S | MV-D&S | Majority Voting | KOS | Ghosh-SVD | EigenRatio
Bird | 10.09 | 11.11 | 24.07 | 11.11 | 27.78 | 27.78
RTE | 7.12 | 7.12 | 10.31 | 39.75 | 49.13 | 9.00
TREC | 29.80 | 30.02 | 34.86 | 51.96 | 42.99 | 43.96
Dog | 16.89 | 16.66 | 19.58 | 31.72 | – | –
Web | 15.86 | 15.74 | 26.93 | 42.93 | – | –
The statistics for the five datasets are summarized in Table 1. Since the Ghosh-SVD algorithm and the EigenRatio algorithm only work on binary tasks, they are evaluated only on the Bird, RTE and TREC datasets. For the MV-D&S and the Opt-D&S methods, we iterate their EM steps until convergence. Since the entries of the confusion matrix are positive, we find it helpful to incorporate this prior knowledge into the initialization stage of the Opt-D&S algorithm. In particular, when estimating the confusion matrix entries by Eq. (3), we add an extra checking step before the normalization, examining whether the matrix components are greater than or equal to a small threshold ∆. Components smaller than ∆ are reset to ∆. The default choice of the thresholding parameter is ∆ = 10−6. Later, we will compare the Opt-D&S algorithm with respect to different choices of ∆. It is important to note that this modification does not change our theoretical result, since the thresholding is not needed when the initialization error bound of Theorem 1 holds. Table 2 summarizes the performance of each method. The MV-D&S and the Opt-D&S algorithms consistently outperform the other methods in predicting the true labels of items. The KOS, Ghosh-SVD and EigenRatio algorithms yield poorer performance, presumably because they rely on idealized assumptions that are not met by the real data. In Figure 1, we compare the Opt-D&S algorithm with respect to different thresholding parameters ∆ ∈{10−i : i = 1, . . . , 6}. We plot results for three datasets (RTE, Dog, Web), where the performance of MV-D&S is equal to or slightly better than that of Opt-D&S. The plot shows that the performance of the Opt-D&S algorithm is stable after convergence. But at the first EM iterate, the error rates are more sensitive to the choice of ∆. A proper choice of ∆ makes Opt-D&S outperform MV-D&S. 
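The thresholding modification described above amounts to a clip-then-renormalize on each estimated confusion matrix; a minimal sketch (the default ∆ follows the text):

```python
import numpy as np

def threshold_normalize(C_hat, delta=1e-6):
    """Reset entries smaller than delta to delta, then rescale each column
    of the estimated confusion matrix so that it sums to one."""
    C = np.maximum(C_hat, delta)
    return C / C.sum(axis=0, keepdims=True)
```

This keeps every entry strictly positive, so the log terms in the E-step (4) stay finite even when the initial estimate contains zeros.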
The result suggests that a proper initialization combined with one EM iterate is good enough for the purposes of prediction. In practice, the best choice of ∆ can be obtained by cross validation. References [1] A. Anandkumar, D. P. Foster, D. Hsu, S. M. Kakade, and Y.-K. Liu. A spectral algorithm for latent Dirichlet allocation. arXiv preprint: 1204.6703, 2012. [2] A. Anandkumar, R. Ge, D. Hsu, and S. M. Kakade. A tensor spectral approach to learning mixed membership community models. In Annual Conference on Learning Theory, 2013. [3] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. arXiv preprint: 1210.7559, 2012. [4] A. Anandkumar, D. Hsu, and S. M. Kakade. A method of moments for mixture models and hidden Markov models. In Annual Conference on Learning Theory, 2012. [5] A. T. Chaganty and P. Liang. Spectral experts for estimating mixtures of linear regressions. arXiv preprint: 1306.3729, 2013. [6] X. Chen, Q. Lin, and D. Zhou. Optimistic knowledge gradient policy for optimal budget allocation in crowdsourcing. In Proceedings of ICML, 2013. [7] N. Dalvi, A. Dasgupta, R. Kumar, and V. Rastogi. Aggregating crowdsourced binary ratings. In Proceedings of the World Wide Web Conference, 2013. [8] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society, Series C, pages 20–28, 1979. [9] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE CVPR, 2009. [10] C. Gao and D. Zhou. Minimax optimal convergence rates for estimating ground truth from crowdsourced labels. arXiv preprint arXiv:1310.5764, 2014. [11] A. Ghosh, S. Kale, and P. McAfee. Who moderates the moderators? Crowdsourcing abuse detection in user-generated content. In Proceedings of the ACM Conference on Electronic Commerce, 2011. [12] D. Hsu, S. M. Kakade, and T. Zhang. 
Fairness in Multi-Agent Sequential Decision-Making
Chongjie Zhang and Julie A. Shah
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
{chongjie,julie a shah}@csail.mit.edu

Abstract

We define a fairness solution criterion for multi-agent decision-making problems, where agents have local interests. This new criterion aims to maximize the worst performance of agents while also accounting for the overall performance. We develop a simple linear programming approach and a more scalable game-theoretic approach for computing an optimal fairness policy. The game-theoretic approach formulates this fairness optimization as a two-player zero-sum game and employs an iterative algorithm for finding a Nash equilibrium, corresponding to an optimal fairness policy. We scale up this approach by exploiting problem structure and value function approximation. Our experiments on resource allocation problems show that this fairness criterion provides a more favorable solution than the utilitarian criterion, and that our game-theoretic approach is significantly faster than linear programming.

Introduction

Factored multi-agent MDPs [4] offer a powerful mathematical framework for studying multi-agent sequential decision problems in the presence of uncertainty. Their compact representation allows us to model large multi-agent planning problems and to develop efficient methods for solving them. Existing approaches to solving factored multi-agent MDPs [4] have focused on the utilitarian solution criterion, i.e., maximizing the sum of individual utilities. The computed utilitarian solution is optimal from the perspective of the system, where performance is additive. However, as the utilitarian solution often discriminates against some agents, it is not desirable for many practical applications where agents have their own interests and fairness is expected.
For example, in manufacturing plants, resources need to be fairly and dynamically allocated to work stations on assembly lines in order to maximize throughput; in telecommunication systems, wireless bandwidth needs to be fairly allocated to avoid "unhappy" customers; in transportation systems, traffic lights are controlled so that traffic flow is balanced. In this paper, we define a fairness solution criterion, called regularized maximin fairness, for multi-agent MDPs. This criterion aims to maximize the worst performance of agents while also accounting for the overall performance. We show that its optimal solution is Pareto-efficient. We focus on centralized joint policies, which are sensible for many practical resource allocation problems. We develop a simple linear programming approach and a more scalable game-theoretic approach for computing an optimal fairness policy. The game-theoretic approach formulates this fairness optimization for factored multi-agent MDPs as a two-player, zero-sum game. Inspired by theoretical results that two-player games tend to have a Nash equilibrium (NE) with a small support [7], we develop an iterative algorithm that incrementally solves this game by starting with a small subgame. This game-theoretic approach can scale up to large problems by relaxing the termination condition, exploiting problem structure in factored multi-agent MDPs, and applying value function approximation. Our experiments on a factory resource allocation problem show that this fairness criterion provides a more favorable solution than the utilitarian criterion [4], and that our game-theoretic approach is significantly faster than linear programming.

Multi-agent decision-making model and its fairness solution

We are interested in multi-agent sequential decision-making problems, where agents have their own interests. We assume that agents are cooperating.
Cooperation can be proactive, e.g., sharing resources with other agents to sustain cooperation that benefits all agents, or passive, where agents' actions are controlled by a third party, as with centralized resource allocation. We use factored multi-agent Markov decision processes (MDPs) to model multi-agent sequential decision-making problems [4]. A factored multi-agent MDP is defined by a tuple $\langle I, X, A, T, \{R_i\}_{i \in I}, b \rangle$, where $I = \{1, \ldots, n\}$ is a set of agent indices. $X$ is the state space, represented by a set of state variables $X = \{X_1, \ldots, X_m\}$. A state is defined by a vector $x$ of value assignments to each state variable. We assume the domain of each variable is finite. $A = \times_{i \in I} A_i$ is a finite set of joint actions, where $A_i$ is a finite set of actions available to agent $i$. The joint action $a = \langle a_1, \ldots, a_n \rangle$ is defined by a vector of individual action choices. $T$ is the transition model. $T(x'|x, a)$ specifies the probability of transitioning to the next state $x'$ after joint action $a$ is taken in the current state $x$. As in [4], we assume that the transition model can be factored and compactly represented by a dynamic Bayesian network (DBN). $R_i(x_i, a_i)$ is the local reward function of agent $i$, which is defined on a small set of variables $x_i \subseteq X$ and $a_i \subseteq A$. $b$ is the initial distribution of states. This model allows us to exploit problem structure to represent exponentially-large multi-agent MDPs compactly. Unlike the factored MDPs defined in [4], which have a single reward function represented by a sum of partial reward functions, this multi-agent model has a local reward function for each agent. From the multi-agent perspective, existing approaches to factored MDPs [4] essentially aim to compute a control policy that maximizes the utilitarian criterion (i.e., the sum of individual utilities).
As the utilitarian criterion often provides a solution that is not fair or satisfactory for some agents (e.g., as shown in the experiments section), it may not be desirable for problems where agents have local interests. In contrast to the utilitarian criterion, an egalitarian criterion, called maximin fairness, has been studied in networking [1, 9], where resources are allocated to optimize the worst performance. This egalitarian criterion exploits the maximin principle in the Rawlsian theory of justice [14], maximizing the benefits of the least-advantaged members of society. In the following, we define a fairness solution criterion for multi-agent MDPs by adapting and combining the maximin fairness criterion and the utilitarian criterion. Under this new criterion, an optimal policy for multi-agent MDPs aims to maximize the worst performance of agents while also accounting for the overall performance. A joint stochastic policy $\pi : X \times A \to \Re$ is a function that returns the probability of taking joint action $a \in A$ in any given state $x \in X$. The utility of agent $i$ under a joint policy $\pi$ is defined as its infinite-horizon total discounted reward, denoted by

$\psi(i, \pi) = E\left[\sum_{t=0}^{\infty} \lambda^t R_i(x_t, a_t) \,\middle|\, \pi, b\right],$   (1)

where $\lambda$ is the discount factor, the expectation operator $E(\cdot)$ averages over stochastic action selection and state transitions, $b$ is the initial state distribution, and $x_t$ and $a_t$ are the state and the joint action taken at time $t$, respectively. To achieve both fairness and efficiency, our goal for a given multi-agent MDP is to find a joint control policy $\pi^*$, called a regularized maximin fairness policy, that maximizes the following objective value function:

$V(\pi) = \min_{i \in I} \psi(i, \pi) + \frac{\epsilon}{n} \sum_{i \in I} \psi(i, \pi),$   (2)

where $n = |I|$ is the number of agents and $\epsilon$ is a strictly positive real number, chosen to be arbitrarily small.
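As an illustration outside the paper itself, once the per-agent utilities $\psi(i, \pi)$ are known, the objective in (2) is a one-line computation. The function name below is hypothetical:

```python
import numpy as np

def regularized_maximin_value(psi, eps=1e-3):
    """Objective (2): worst agent utility plus a small utilitarian regularizer.

    psi: array of per-agent utilities psi(i, pi); eps: regularization weight.
    """
    psi = np.asarray(psi, dtype=float)
    return psi.min() + (eps / len(psi)) * psi.sum()

# Hypothetical utilities for three agents under two candidate policies.
print(regularized_maximin_value([10.0, 2.0, 8.0]))  # skewed policy, poor worst case
print(regularized_maximin_value([6.0, 6.0, 6.0]))   # balanced policy scores higher
```

With a small `eps`, the min term dominates, so the balanced policy is preferred even though its utilitarian sum (18) is lower than the skewed policy's (20).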
This fairness objective function can be seen as a lexicographic aggregation of the egalitarian criterion (min) and the utilitarian criterion (sum of utilities), with priority given to egalitarianism. This fairness criterion can also be seen as a particular instance of the weighted Tchebycheff distance with respect to a reference point, a classical scalarization function used to generate compromise solutions in multi-objective optimization [16]. Note that the optimal policy under the egalitarian (or maximin) criterion alone may not be Pareto efficient, but the optimal policy under this regularized fairness criterion is guaranteed to be Pareto efficient.

Definition 1. A joint control policy $\pi$ is said to be Pareto efficient if and only if there does not exist another joint policy $\pi'$ such that the utility is at least as high for all agents and strictly higher for at least one agent, that is, $\nexists \pi': \forall i,\ \psi(i, \pi') \ge \psi(i, \pi) \,\wedge\, \exists i,\ \psi(i, \pi') > \psi(i, \pi)$.

Proposition 1. A regularized maximin fairness policy $\pi^*$ is Pareto efficient.

Proof. We prove by contradiction. Assume the regularized maximin fairness policy $\pi^*$ is not Pareto efficient. Then there must exist a policy $\pi$ such that $\forall i,\ \psi(i, \pi) \ge \psi(i, \pi^*)$ and $\exists i,\ \psi(i, \pi) > \psi(i, \pi^*)$. Then $V(\pi) = \min_{i \in I} \psi(i, \pi) + \frac{\epsilon}{n} \sum_{i \in I} \psi(i, \pi) > \min_{i \in I} \psi(i, \pi^*) + \frac{\epsilon}{n} \sum_{i \in I} \psi(i, \pi^*) = V(\pi^*)$, which contradicts the precondition that $\pi^*$ is a regularized maximin fairness policy.

In this paper, we mainly focus on centralized policies for multi-agent MDPs. This focus is sensible because we assume that, although agents have local interests, they are also willing to cooperate. Many practical problems modeled by multi-agent MDPs use centralized policies to achieve fairness, e.g., network bandwidth allocation by telecommunication companies, traffic congestion control, public service allocation, and, more generally, fair resource allocation under uncertainty.
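The dominance test in Definition 1 is mechanical; a minimal sketch (function name hypothetical):

```python
def pareto_dominates(u_new, u_old):
    """True iff u_new is at least as good for every agent and strictly better for one."""
    return (all(a >= b for a, b in zip(u_new, u_old))
            and any(a > b for a, b in zip(u_new, u_old)))

print(pareto_dominates([3, 5], [3, 4]))  # True: agent 2 gains, nobody loses
print(pareto_dominates([4, 3], [3, 4]))  # False: the two utility vectors are incomparable
```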
On the other hand, we can derive decentralized policies for individual agents from a maximin fairness policy $\pi^*$ by marginalizing it over the actions of all other agents. If the maximin fairness policy is deterministic, then the derived decentralized policy profile is also optimal under the regularized maximin fairness criterion. Although such a guarantee generally does not hold for stochastic policies, as indicated by the following proposition, the derived decentralized policy is a bounded solution in the space of decentralized policies under the regularized maximin fairness criterion.

Proposition 2. Let $\pi^{c*}$ be an optimal centralized policy and $\pi^{dec*}$ be an optimal decentralized policy profile under the regularized maximin fairness criterion. Let $\pi^{dec}$ be a decentralized policy profile derived from $\pi^{c*}$ by marginalization. The values of the policies $\pi^{c*}$ and $\pi^{dec}$ provide bounds for the value of $\pi^{dec*}$, that is, $V(\pi^{c*}) \ge V(\pi^{dec*}) \ge V(\pi^{dec})$.

The proof of this proposition is straightforward. The first inequality holds because any decentralized policy profile can be converted to a centralized policy by taking the product, and the second inequality holds because $\pi^{dec*}$ is an optimal decentralized policy profile. When the bounds provided by $V(\pi^{c*})$ and $V(\pi^{dec})$ are close, we can conclude that $\pi^{dec}$ is almost an optimal decentralized policy profile under the regularized maximin fairness criterion. In this paper, we are primarily concerned with total discounted rewards over an infinite horizon, but the definition, analysis, and computation of regularized maximin fairness can be adapted to a finite horizon with an undiscounted sum of rewards. In the next section, we present approaches to computing the regularized maximin fairness policy for infinite-horizon multi-agent MDPs.
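The marginalization step that produces decentralized policies can be sketched for two agents in a single state. This toy joint policy is hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical centralized joint policy at one state: rows index agent 1's
# actions, columns index agent 2's actions, entries are joint-action probabilities.
joint = np.array([[0.4, 0.1],
                  [0.2, 0.3]])

pi_1 = joint.sum(axis=1)  # agent 1's decentralized policy: marginal over agent 2
pi_2 = joint.sum(axis=0)  # agent 2's decentralized policy: marginal over agent 1
print(pi_1, pi_2)  # [0.5 0.5] [0.6 0.4]
```

Note that the product policy `np.outer(pi_1, pi_2)` generally differs from `joint`, which is why the guarantee in Proposition 2 is only a bound for stochastic policies.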
Computing Regularized Maximin Fairness Policies

In this section, we present two approaches to computing regularized maximin fairness policies for multi-agent MDPs: a simple linear programming approach and a game-theoretic approach. The former approach is adapted from the linear programming formulation of single-agent MDPs. The latter approach formulates this fairness problem as a two-player zero-sum game and employs an iterative search method for finding a Nash equilibrium that contains a regularized maximin fairness policy. This iterative algorithm allows us to scale up to large problems by exploiting structure in multi-agent MDPs and value function approximation, and by employing a relaxed termination condition.

¹In some applications, we may choose a suitably large $\epsilon$ to trade off fairness against the overall performance.

A linear programming approach

For a multi-agent MDP, given a joint policy and the initial state distribution, the frequencies of visiting state-action pairs are uniquely determined. We use $f_\pi(x, a)$ to denote the total discounted probability, under the policy $\pi$ and initial state distribution $b$, that the system occupies state $x$ and chooses action $a$. Using this frequency function, we can rewrite the expected total discounted rewards as

$\psi(i, \pi) = \sum_x \sum_a f_\pi(x, a) R_i(x_i, a_i),$   (3)

where $x_i \subseteq x$ and $a_i \subseteq a$. Since the dynamics of a multi-agent MDP is Markovian, as it is for a single-agent MDP, we can adapt the linear programming formulation of single-agent MDPs to find an optimal centralized policy for multi-agent MDPs under the regularized maximin fairness criterion:

$\max_f\ \min_{i \in I} \sum_x \sum_a f(x, a) R_i(x_i, a_i) + \frac{\epsilon}{n} \sum_{i \in I} \sum_x \sum_a f(x, a) R_i(x_i, a_i)$
s.t. $\sum_a f(x', a) = b(x') + \sum_x \sum_a \lambda T(x'|x, a) f(x, a),\ \forall x' \in X$
$f(x, a) \ge 0$, for all $a \in A$ and $x \in X$.   (4)

The constraints ensure that $f(x, a)$ is well-defined.
The first set of constraints requires that the probability of visiting state $x'$ equal the initial probability of state $x'$ plus the sum of all discounted probabilities of entering state $x'$. We linearize this program by introducing another variable $z$, which represents the minimum expected total discounted reward among all agents:

$\max_f\ z + \frac{\epsilon}{n} \sum_{i \in I} \sum_x \sum_a f(x, a) R_i(x_i, a_i)$
s.t. $z \le \sum_x \sum_a f(x, a) R_i(x_i, a_i),\ \forall i \in I$
$\sum_a f(x', a) = b(x') + \sum_x \sum_a \lambda T(x'|x, a) f(x, a),\ \forall x' \in X$
$f(x, a) \ge 0$, for all $a \in A$ and $x \in X$.   (5)

We can employ existing linear programming solvers (e.g., the simplex method) to compute an optimal solution $f^*$ for problem (5) and derive a policy $\pi^*$ from $f^*$ by normalization:

$\pi(x, a) = \frac{f(x, a)}{\sum_{a \in A} f(x, a)}.$   (6)

Using Theorem 6.9.1 in [13], we can easily show that the derived policy $\pi^*$ is optimal under the regularized maximin fairness criterion. This linear programming approach is simple, but it does not scale to multi-agent MDPs with large state spaces or large numbers of agents, because the number of constraints of the linear program is $|X| + |I|$. In the next sections, we present a more scalable game-theoretic approach for large multi-agent MDPs.

A game-theoretic approach

Since the fairness objective function in (2) can be turned into a maximin function, inspired by von Neumann's minimax theorem we can formulate this optimization problem as a two-player zero-sum game. Motivated by theoretical results showing that two-player games tend to have a Nash equilibrium (NE) with a small support, we develop an iterative algorithm for solving zero-sum games. Let $\Pi^S$ and $\Pi^D$ be the sets of stochastic Markovian policies and deterministic Markovian policies, respectively. As shown in [13], every stochastic policy can be represented by a convex combination of deterministic policies, and every convex combination of deterministic policies corresponds to a stochastic policy.
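To make the linearized program (5) concrete, here is a sketch on a toy multi-agent MDP (two states, two actions, two agents — all numbers hypothetical, not from the paper). Agent 0 is rewarded in state 0 and agent 1 in state 1; action 0 stays in the current state, action 1 switches states, and the system starts in state 0:

```python
import numpy as np
from scipy.optimize import linprog

lam, eps, n = 0.9, 0.01, 2
# Variable vector: [z, f(0,0), f(0,1), f(1,0), f(1,1)]
R = np.array([[1, 1, 0, 0],    # agent 0's reward at each (x, a)
              [0, 0, 1, 1]])   # agent 1's reward at each (x, a)

# Objective of (5): maximize z + (eps/n) * sum_i f.R_i  ->  minimize the negation.
c = -np.concatenate(([1.0], (eps / n) * R.sum(axis=0)))

# Fairness constraints: z <= f.R_i for each agent i.
A_ub = np.hstack([np.ones((n, 1)), -R.astype(float)])
b_ub = np.zeros(n)

# Flow conservation: sum_a f(x',a) - lam * sum_{x,a} T(x'|x,a) f(x,a) = b(x').
A_eq = np.array([[1 - lam, 1,       0,       -lam],   # state 0
                 [0,       -lam,    1 - lam, 1   ]], dtype=float)
A_eq = np.hstack([np.zeros((2, 1)), A_eq])
b_eq = np.array([1.0, 0.0])  # initial distribution b = (1, 0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] + [(0, None)] * 4)
f = res.x[1:]
psi = R @ f  # each agent's discounted utility under the computed fair policy
print(psi)   # both utilities equalized at 5.0 = 0.5 / (1 - lam)
```

The flow constraints force the total discounted occupancy to $1/(1-\lambda) = 10$, and the fair solution splits it evenly, giving each agent a utility of 5. A stochastic policy can then be read off via the normalization in (6).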
Specifically, for any stochastic policy $\pi^s \in \Pi^S$, we can represent $\pi^s = \sum_i p_i \pi^d_i$ using some set $\{\pi^d_1, \ldots, \pi^d_k\} \subset \Pi^D$ with probability distribution $p$.

Algorithm 1: An iterative approach to computing the regularized maximin fairness policy
1  Initialize a zero-sum game $G(\bar\Pi^D, \bar I)$ with small subsets $\bar\Pi^D \subset \Pi^D$ and $\bar I \subset I$;
2  repeat
3    $(p^*, q^*, V^*) \leftarrow$ compute a Nash equilibrium of game $G(\bar\Pi^D, \bar I)$;
4    $(\pi^d, V_p) \leftarrow$ compute the best-response deterministic policy against $q^*$ in $G(\Pi^D, I)$;
5    if $V_p > V^*$ then $\bar\Pi^D \leftarrow \bar\Pi^D \cup \{\pi^d\}$;
6    $(i, V_q) \leftarrow$ compute the best response against $p^*$ among all agents $I$;
7    if $V_q < V^*$ then $\bar I \leftarrow \bar I \cup \{i\}$;
8    if $G(\bar\Pi^D, \bar I)$ changes then expand its payoff matrix with $U(\pi^d, i)$ for new pairs $(\pi^d, i)$;
9  until game $G(\bar\Pi^D, \bar I)$ converges;
10 return the regularized maximin fairness policy $\pi^s_{p^*} = p^* \cdot \bar\Pi^D$;

Let $U(\pi, i) = \psi(i, \pi) + \frac{\epsilon}{n} \sum_{j \in I} \psi(j, \pi)$. We can construct a two-player zero-sum game $G(\Pi^D, I)$ as follows: the maximizing player, who aims to maximize the value of the game, chooses a deterministic policy $\pi^d$ from $\Pi^D$; the minimizing player, who aims to minimize the value of the game, chooses an agent indexed by $i \in I$ in the multi-agent MDP; and the payoff matrix has an entry $U(\pi^d, i)$ for each pair $\pi^d \in \Pi^D$ and $i \in I$. The following proposition shows that we can compute the regularized maximin fairness policy by solving $G(\Pi^D, I)$.

Proposition 3. Let the strategy profile $(p^*, q^*)$ be a NE of the game $G(\Pi^D, I)$, and let $\pi^s_{p^*}$ be the stochastic policy derived from $p^*$ with $\pi^s_{p^*}(x, a) = \sum_i p^*_i \pi^d_i(x, a)$, where $p^*_i$ is the $i$th component of $p^*$, i.e., the probability of choosing the deterministic policy $\pi^d_i \in \Pi^D$. Then $\pi^s_{p^*}$ is a regularized maximin fairness policy.

Proof. According to von Neumann's minimax theorem, $p^*$ is also the maximin strategy for the zero-sum game $G(\Pi^D, I)$.
$\min_i U(\pi^s_{p^*}, i) = \min_i \sum_j p^*_j U(\pi^d_j, i)$   (let $\pi^s_{p^*} = \sum_j p^*_j \pi^d_j$)
$= \min_q \sum_j \sum_i p^*_j q_i U(\pi^d_j, i)$   (there always exists a pure best-response strategy)
$= \max_p \min_q \sum_j \sum_i p_j q_i U(\pi^d_j, i)$   ($p^*$ is the maximin strategy)
$\ge \max_p \min_i \sum_j p_j U(\pi^d_j, i)$   (consider $i$ as a pure strategy)
$= \max_{\pi_p} \min_i U(\pi_p, i)$   (let $\pi_p = \sum_j p_j \pi^d_j$)

By definition, $\pi^s_{p^*}$ is a regularized maximin fairness policy.

As the game $G(\Pi^D, I)$ is usually extremely large, and computing its payoff matrix is also non-trivial, it is impossible to directly use linear programming to solve this game. On the other hand, existing work such as [7], which analyzes the theoretical properties of the NEs of games drawn from a particular distribution, shows that the support sizes of Nash equilibria tend to be balanced and small, especially for $n = 2$. Prior work [11] demonstrated that it is beneficial to exploit these results in finding a NE, especially in 2-player games. Inspired by these results, we develop an iterative method to compute a fairness policy, shown in Algorithm 1. Intuitively, Algorithm 1 works as follows. It starts by computing a NE for a small subgame (Line 3) and then checks whether this NE is also a NE of the whole game (Lines 4-7); if not, it expands the subgame and repeats this process until a NE is found for the whole game. Line 1 initializes a small subgame of the original game, which can be arbitrary. In our experiments, it is initialized with a random agent and a policy maximizing that agent's utility. Line 3 solves the two-player zero-sum game using linear programming or any other suitable technique. $V^*$ is the maximin value of this subgame.
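The subgame solve in Line 3 is a standard maximin LP over a small payoff matrix. The sketch below (helper name and payoff numbers hypothetical) uses the classic formulation: maximize $v$ subject to $\sum_j p_j U[j, i] \ge v$ for every column $i$, with $p$ a distribution:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(U):
    """Maximin strategy of the row player for payoff matrix U (rows = policies,
    columns = agents), as needed in Line 3 of Algorithm 1."""
    m, k = U.shape
    # Variables: [v, p_1, ..., p_m]; linprog minimizes, so negate v.
    c = np.concatenate(([-1.0], np.zeros(m)))
    A_ub = np.hstack([np.ones((k, 1)), -U.T])  # v - sum_j p_j U[j, i] <= 0
    A_eq = np.concatenate(([0.0], np.ones(m)))[None, :]  # p sums to 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(k), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, 1)] * m)
    return res.x[1:], -res.fun  # (p*, game value V*)

# Hypothetical 2x2 payoff matrix U(pi_j, i).
p, v = solve_zero_sum(np.array([[3.0, 1.0], [1.0, 3.0]]))
print(p, v)  # the row player mixes 50/50; the game value is 2.0
```

Because Algorithm 1 keeps the subgame small, each such LP stays cheap even when the full game $G(\Pi^D, I)$ is astronomically large.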
The best-response problem in Line 4 is to find a deterministic policy $\pi$ that maximizes the payoff

$U(\pi, q^*) = \sum_{i \in \bar I} q^*_i U(\pi, i) = \sum_{i \in \bar I} q^*_i \left[\psi(i, \pi) + \frac{\epsilon}{n} \sum_{j \in I} \psi(j, \pi)\right] = \sum_{i \in I} \left(q^*_i + \frac{\epsilon}{n}\right) \psi(i, \pi).$

Solving this optimization problem is equivalent to finding the optimal policy of a regular MDP with reward function $R(x, a) = \sum_{i \in I} (q^*_i + \frac{\epsilon}{n}) R_i(x_i, a_i)$. We can use the dual linear programming approach [13] for this MDP, which outputs the visitation frequency function $f_{\pi^d}(x, a)$ representing the optimal policy. This representation facilitates the computation of the payoff $U(\pi^d_i, i)$ using Equation 3. $V_p = \sum_i q^*_i U(\pi^d, i)$ is the maximizing player's utility of its best response against $q^*$. Line 5 checks whether the best response $\pi^d$ is strictly better than $p^*$. If so, we can infer that $p^*$ is not the best response against $q^*$ in the whole game and that $\pi^d$ is not yet in $\bar\Pi^D$; $\pi^d$ is then added to $\bar\Pi^D$ to expand the subgame. Line 6 finds the minimizing player's best response against $p^*$, which minimizes the payoff of the maximizing player. Note that there always exists a pure best-response strategy, so we formulate this best-response problem as

$\min_{i \in I} U(\pi_{p^*}, i) = \min_{i \in I} \sum_j p^*_j U(\pi^d_j, i),$   (7)

where $\pi_{p^*}$ is the stochastic policy corresponding to the probability distribution $p^*$. We can solve this problem by directly searching for the agent $i$ that yields the minimum utility, with linear time complexity. Similar to Line 5, Line 7 checks whether the minimizing player strictly prefers $i$ to $q^*$ against $p^*$ and expands the subgame if needed. The algorithm terminates when the subgame does not change.

Proposition 4. Algorithm 1 converges to a regularized maximin fairness policy.

Proof. The convergence of this algorithm follows immediately, because there is a finite number of deterministic Markovian policies and agents for a given multi-agent MDP. The algorithm terminates if and only if neither of the conditions in Lines 5 and 7 holds.
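The reward aggregation that turns the Line 4 best response into a single-objective MDP is a weighted sum over agents. A minimal sketch with hypothetical reward tables:

```python
import numpy as np

# Hypothetical local reward tables R[i, x, a], one per agent, indexed by (state, action).
R = np.array([[[1.0, 0.0], [0.0, 0.0]],
              [[0.0, 0.0], [0.0, 1.0]]])
q, eps = np.array([0.7, 0.3]), 0.01  # q*: the minimizing player's mixed strategy
n = len(q)

# Scalar reward R(x, a) = sum_i (q*_i + eps/n) R_i(x, a) for the best-response MDP.
R_best_response = np.tensordot(q + eps / n, R, axes=1)
print(R_best_response)
```

The resulting table weights each agent's reward by how much the adversary currently "picks on" that agent, so the best-response policy focuses on the worst-treated agents.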
This situation indicates that no player strictly prefers a strategy outside the support of its current strategy, which implies that $(p^*, q^*)$ is a NE of the whole game $G(\Pi^D, I)$. Using Proposition 3, we conclude that Algorithm 1 returns a regularized maximin fairness policy.

Algorithm 1 shares some similarities with the double oracle algorithm proposed in [8] for iteratively solving zero-sum games. The double oracle method is motivated by the Benders decomposition technique, while our iterative algorithm exploits properties of Nash equilibria, which leads to a more efficient implementation. For example, unlike our algorithm, the double oracle method checks by comparison whether the computed best-response MDP policy already exists in the current subgame, which is time-consuming for MDP policies with a large state space.

Scaling the game-theoretic approach

Both linear programming and the game-theoretic approach suffer scalability issues for large problems. In multi-agent MDPs, the state space is exponential in the number of state variables and the action space is exponential in the number of agents. This results in an exponential number of variables and constraints in the linear program formulation. In this section, we investigate methods to scale up the game-theoretic approach. The major bottleneck of the iterative algorithm is the computation of the best-response policy (Line 4 in Algorithm 1). As discussed in the previous section, this optimization is equivalent to finding the optimal policy of a regular MDP with reward function $R(x, a) = \sum_i (q^*_i + \frac{\epsilon}{n}) R_i(x_i, a_i)$. Due to the exponential state-action space, exact algorithms (e.g., linear programming) are impractical in most cases. Fortunately, this MDP is essentially a factored MDP [4] with a weighted sum of partial reward functions. We can use existing approximate algorithms [4] to solve factored MDPs, which exploit both the factored structure of the problem and value function approximation.
For example, the approximate linear programming approach for factored MDPs can provide efficient policies with up to an exponential reduction in computation time.

Table 1: Performance in sample problems with different cell sizes and total resources
#C  #R  #N   Time-LP  Time-GT  Sol-LP  Sol-GT
4   12  7E4  68.22s   11.43s   157.67  154.24
4   20  3E5  22.39m   35.27s   250.59  239.87
5   10  4E5  89.77m   48.56s   104.33  97.48
5   20  6E6  -        4.98m    -       189.62
6   18  5E7  -        43.36m   -       153.63

Table 2: A comparison of three criteria in a 4-agent 20-resource problem
C    MPE     Utilitarian  Fairness
1    180.41  117.44       250.59
2    198.45  184.20       250.59
3    216.49  290.69       250.59
4    234.53  444.08       250.59
Min  108.22  68.32        157.67

A few subtleties are worth noting when approximate linear programming is employed. First, the best response's utility $V_p$ should be computed by evaluating the computed approximate policy against $q^*$, instead of directly using the value from the approximate value function; otherwise, the convergence of Algorithm 1 is not guaranteed. Similarly, the payoff $U(\pi^d, i)$ should be calculated through policy evaluation. Second, existing approximate algorithms for factored MDPs usually output a deterministic policy $\pi^d(x)$ that is not represented by the visitation frequency function $f_\pi(x, a)$. To facilitate policy evaluation, we may convert a policy $\pi^d(x)$ to a frequency function $f_{\pi^d}(x, a)$. Note that $f_{\pi^d}(x, a) = 0$ for all $a \ne \pi^d(x)$. For the remaining state-action pairs, we can compute the visitation frequencies by solving the following equation:

$f_{\pi^d}(x', \pi^d(x')) = b(x') + \lambda \sum_x T(x'|x, \pi^d(x)) f_{\pi^d}(x, \pi^d(x)).$   (8)

This equation can be solved approximately but more efficiently using an iterative method, similar to MDP value iteration. Finally, Algorithm 1 is still guaranteed to converge, but it may return a sub-optimal solution. We can also speed up Algorithm 1 by relaxing its termination condition, which essentially reduces the number of iterations.
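The iterative fixed-point solve behind Equation (8) can be sketched as follows (function name and the two-state chain are hypothetical):

```python
import numpy as np

def visitation_frequencies(T, policy, b, lam, iters=500):
    """Iterate f(x') <- b(x') + lam * sum_x T(x'|x, pi(x)) f(x), the fixed
    point of Equation (8), for a deterministic policy pi."""
    n_states = len(b)
    # Transition matrix induced by the deterministic policy: P[x, x'].
    P = np.array([T[x, policy[x]] for x in range(n_states)])
    f = np.zeros(n_states)
    for _ in range(iters):
        f = b + lam * P.T @ f
    return f

# Two-state toy chain (hypothetical): T[x, a, x']; action 0 switches, action 1 stays.
T = np.zeros((2, 2, 2))
T[0, 0, 1] = T[1, 0, 0] = 1.0
T[0, 1, 0] = T[1, 1, 1] = 1.0
f = visitation_frequencies(T, policy=[0, 0], b=np.array([1.0, 0.0]), lam=0.9)
print(f)  # ~[5.263, 4.737]; discounted visits sum to 1 / (1 - 0.9) = 10
```

Like value iteration, the update is a contraction with modulus $\lambda$, so the error shrinks geometrically and a few hundred sweeps suffice in practice.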
We can use the termination condition $V_p - V_q < \epsilon$, which turns the iterative approach into an approximation algorithm.

Proposition 5. The iterative approach using the termination condition $V_p - V_q < \epsilon$ has bounded error $\epsilon$.

Proof. Let $V^{opt}$ be the value of the regularized maximin fairness policy and $V(\pi^*)$ be the value of the computed policy $\pi^*$. By definition, $V^{opt} \ge V(\pi^*)$. Following von Neumann's minimax theorem, we have $V_p \ge V^{opt} \ge V_q$. Since $V_q$ is the value of the minimizing player's best response against $\pi^*$, $V(\pi^*) \ge V_q > V_p - \epsilon \ge V^{opt} - \epsilon$.

Experiments

One motivating domain for our work is resource allocation in a pulse-line manufacturing plant. In a pulse-line factory, the manufacturing process of complex products is divided into several stages, each of which contains a set of tasks to be done in a corresponding work cell. The overall performance of a pulse line is mainly determined by the worst performance of its work cells. Considering the dynamics and uncertainty of the manufacturing environment, we need to dynamically allocate resources to balance the progress of work cells in order to optimize the throughput of the pulse line. We evaluate our fairness solution criterion and its computation approaches, linear programming (LP) and the game-theoretic (GT) approach with approximation, on this resource allocation problem. For simplicity, we focus on managing one type of resource. We view each work cell in a pulse line as an agent. Each agent's state is represented by two variables: task level (i.e., high or low) and the number of local resources. An agent's next task level is affected by the current task levels of itself and the previous agent. An action is defined on a directed link between two agents, representing the transfer of one unit of resource from one agent to another. There is one additional action for all agents: "no change". We assume only neighboring agents can transfer resources.
An agent's reward is measured by the number of partially-finished products that will be processed between two decision points, given its current task level and resources. We use a discount factor $\lambda = 0.95$. We use the approximate linear programming technique presented in [4] for solving the factored MDPs generated in the GT approach. We implemented everything in Java, used Gurobi 2.6 [5] for solving linear programs, and ran the experiments on a 2.4GHz Intel Core i5 with 8GB RAM.

Table 1 shows the performance of linear programming and the game-theoretic approach on different problems, varying the number of work cells #C and total resources #R. The third column, #N = |X||A|, is the state-action space size. We can observe that the game-theoretic approach is significantly faster than linear programming. This speed improvement is largely due to the integration of approximate linear programming, which exploits the problem structure and value function approximation. In addition, the game-theoretic approach scales well to large problems: with 6 cells and 18 resources, the size of the state-action space is around 5 × 10^7. The last two columns show the minimum expected reward among agents, which determines the performance of the pulse line. The game-theoretic approach incurs less than an 8% loss relative to the optimal solution computed by LP. We also compare the regularized maximin fairness criterion against the utilitarian criterion (i.e., maximizing the sum of individual utilities) and the Markov perfect equilibrium (MPE). MPE is an extension of Nash equilibrium to stochastic games. One obvious MPE in our resource allocation problem is that no agent transfers its resources to other agents. We evaluated the criteria on different problems, and the results are qualitatively similar. Table 2 shows the performance of all work cells under the optimal policy of each criterion in a problem with 4 agents and 20 resources.
The fairness policy balanced the performance of all agents and provided a better solution (i.e., a greater minimum utility) than the other criteria. The perfect balance is due to the stochasticity of the computed policy. Even in terms of the sum of utilities, the fairness policy incurs less than a 4% loss relative to the optimal policy under the utilitarian criterion. The utilitarian criterion generated a highly skewed solution with the lowest minimum utility among the three criteria. In addition, we can observe that, under the fairness criterion, all agents performed better than under MPE, which suggests that cooperation is beneficial for all of them in this problem.

Related Work

When using centralized policies, our multi-agent MDPs can also be viewed as multi-objective MDPs [15]. Recently, Ogryczak et al. [10] defined a compromise solution for multi-objective MDPs using the Tchebycheff scalarization function. They developed a linear programming approach for finding such compromise solutions; however, this is computationally impractical for most real-world problems. In contrast, we develop a more scalable game-theoretic approach for finding fairness solutions by exploiting the structure of factored multi-agent MDPs and value function approximation. The notion of maximin fairness is also widely used in the field of networking, for example in bandwidth sharing, congestion control, routing, load balancing, and network design [1, 9]. In contrast to our work, maximin fairness in networking is defined without regularization, addresses only one-shot resource allocation, and does not consider the dynamics and uncertainty of the environment. Fair division is an active research area in economics, especially social choice theory. It is concerned with the division of a set of goods among several people, such that each person receives his or her due share.
In the last few years, fair division has attracted the attention of AI researchers [2, 12], who envision the application of fair division in multi-agent systems, especially for multi-agent resource allocation [3, 6]. Fair division theory focuses on proportional fairness and envy-freeness. Most existing work in fair division involves a static setting, where all relevant information is known upfront and is fixed. Only a few approaches deal with the dynamics of agent arrivals and departures [6, 17]. In contrast to our model and approach, these dynamic approaches to fair division do not address uncertainty or other dynamics, such as changes in resource availability and users' resource demands.

Conclusion

In this paper, we defined a fairness solution criterion, called regularized maximin fairness, for multi-agent decision-making under uncertainty. This solution criterion aims to maximize the worst performance among agents while considering the overall performance of the system. It is finding applications in various domains, including resource sharing, public service allocation, load balancing, and congestion control. We also developed a simple linear programming approach and a more scalable game-theoretic approach for computing the optimal policy under this new criterion. The game-theoretic approach can scale up to large problems by exploiting the problem structure and value function approximation.

References

[1] Thomas Bonald and Laurent Massoulié. Impact of fairness on internet performance. In Proceedings of the 2001 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, pages 82–91, 2001.
[2] Yiling Chen, John Lai, David C. Parkes, and Ariel D. Procaccia. Truth, justice, and cake cutting. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010.
[3] Yann Chevaleyre, Paul E. Dunne, Ulle Endriss, Jérôme Lang, Michel Lemaître, Nicolas Maudet, Julian A. Padget, Steve Phelps, Juan A. Rodríguez-Aguilar, and Paulo Sousa.
Issues in multiagent resource allocation. Informatica (Slovenia), 30(1):3–31, 2006. [4] C. Guestrin, D. Koller, R. Parr, and S. Venkataraman. Efficient solution algorithms for factored MDPs. Journal of Artificial Intelligence Research, 19:399–468, 2003. [5] Gurobi Optimization, Inc. Gurobi optimizer reference manual, 2014. [6] Ian A. Kash, Ariel D. Procaccia, and Nisarg Shah. No agent left behind: dynamic fair division of multiple resources. In International Conference on Autonomous Agents and Multi-Agent Systems, pages 351–358, 2013. [7] Andrew McLennan and Johannes Berg. Asymptotic expected number of Nash equilibria of two-player normal form games. Games and Economic Behavior, 51(2):264–295, 2005. [8] H. Brendan McMahan, Geoffrey J. Gordon, and Avrim Blum. Planning in the presence of cost functions controlled by an adversary. In Proceedings of the Twentieth International Conference on Machine Learning, pages 536–543, 2003. [9] Dritan Nace and Michal Pióro. Max-min fairness and its applications to routing and load-balancing in communication networks: A tutorial. IEEE Communications Surveys and Tutorials, 10(1-4):5–17, 2008. [10] Wlodzimierz Ogryczak, Patrice Perny, and Paul Weng. A compromise programming approach to multiobjective Markov decision processes. International Journal of Information Technology and Decision Making, 12(5):1021–1054, 2013. [11] Ryan Porter, Eugene Nudelman, and Yoav Shoham. Simple search methods for finding a Nash equilibrium. In Proceedings of the 19th National Conference on Artificial Intelligence, pages 664–669, 2004. [12] Ariel D. Procaccia. Thou shalt covet thy neighbor's cake. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, pages 239–244, 2009. [13] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley-Interscience, 2005. [14] John Rawls. A Theory of Justice. Harvard University Press, Cambridge, MA, 1971. [15] Diederik M.
Roijers, Peter Vamplew, Shimon Whiteson, and Richard Dazeley. A survey of multi-objective sequential decision-making. Journal of Artificial Intelligence Research, 48(1):67–113, October 2013. [16] Ralph E. Steuer. Multiple Criteria Optimization: Theory, Computation, and Application. John Wiley, 1986. [17] Toby Walsh. Online cake cutting. In Algorithmic Decision Theory - Second International Conference, volume 6992 of Lecture Notes in Computer Science, pages 292–305, 2011.
Multi-Class Deep Boosting Vitaly Kuznetsov Courant Institute 251 Mercer Street New York, NY 10012 vitaly@cims.nyu.edu Mehryar Mohri Courant Institute & Google Research 251 Mercer Street New York, NY 10012 mohri@cims.nyu.edu Umar Syed Google Research 76 Ninth Avenue New York, NY 10011 usyed@google.com Abstract We present new ensemble learning algorithms for multi-class classification. Our algorithms can use as a base classifier set a family of deep decision trees or other rich or complex families and yet benefit from strong generalization guarantees. We give new data-dependent learning bounds for convex ensembles in the multi-class classification setting expressed in terms of the Rademacher complexities of the sub-families composing the base classifier set, and the mixture weight assigned to each sub-family. These bounds are finer than existing ones both thanks to an improved dependency on the number of classes and, more crucially, by virtue of a more favorable complexity term expressed as an average of the Rademacher complexities based on the ensemble's mixture weights. We introduce and discuss several new multi-class ensemble algorithms benefiting from these guarantees, prove positive results for the H-consistency of several of them, and report the results of experiments showing that their performance compares favorably with that of multi-class versions of AdaBoost and Logistic Regression and their L1-regularized counterparts. 1 Introduction Devising ensembles of base predictors is a standard approach in machine learning which often helps improve performance in practice. Ensemble methods include the family of boosting meta-algorithms, among which the most notable and widely used one is AdaBoost [Freund and Schapire, 1997], also known as forward stagewise additive modeling [Friedman et al., 1998]. AdaBoost and its other variants learn convex combinations of predictors.
They seek to greedily minimize a convex surrogate function upper bounding the misclassification loss by augmenting, at each iteration, the current ensemble with a new suitably weighted predictor. One key advantage of AdaBoost is that, since it is based on a stagewise procedure, it can learn an effective ensemble of base predictors chosen from a very large and potentially infinite family, provided that an efficient algorithm is available for selecting a good predictor at each stage. Furthermore, AdaBoost and its L1-regularized counterpart [Rätsch et al., 2001a] benefit from favorable learning guarantees, in particular theoretical margin bounds [Schapire et al., 1997, Koltchinskii and Panchenko, 2002]. However, those bounds depend not just on the margin and the sample size, but also on the complexity of the base hypothesis set, which suggests a risk of overfitting when using too complex base hypothesis sets. And indeed, overfitting has been reported in practice for AdaBoost in the past [Grove and Schuurmans, 1998, Schapire, 1999, Dietterich, 2000, Rätsch et al., 2001b]. Cortes, Mohri, and Syed [2014] introduced a new ensemble algorithm, DeepBoost, which they proved to benefit from finer learning guarantees, including favorable ones even when using as base classifier set relatively rich families, for example a family of very deep decision trees, or other similarly complex families. In DeepBoost, the decisions in each iteration of which classifier to add to the ensemble and which weight to assign to that classifier depend on the (data-dependent) complexity of the sub-family to which the classifier belongs – one interpretation of DeepBoost is that it applies the principle of structural risk minimization to each iteration of boosting. Cortes, Mohri, and Syed [2014] further showed that empirically DeepBoost achieves a better performance than AdaBoost, Logistic Regression, and their L1-regularized variants.
The main contribution of this paper is an extension of these theoretical, algorithmic, and empirical results to the multi-class setting. Two distinct approaches have been considered in the past for the definition and the design of boosting algorithms in the multi-class setting. One approach consists of combining base classifiers mapping each example x to an output label y. This includes the SAMME algorithm [Zhu et al., 2009] as well as the algorithm of Mukherjee and Schapire [2013], which is shown to be, in a certain sense, optimal for this approach. An alternative approach, often more flexible and more widely used in applications, consists of combining base classifiers mapping each pair (x, y) formed by an example x and a label y to a real-valued score. This is the approach adopted in this paper, which is also the one used for the design of AdaBoost.MR [Schapire and Singer, 1999] and other variants of that algorithm. In Section 2, we prove a novel generalization bound for multi-class classification ensembles that depends only on the Rademacher complexity of the hypothesis classes to which the classifiers in the ensemble belong. Our result generalizes the main result of Cortes et al. [2014] to the multi-class setting, and also represents an improvement on the multi-class generalization bound due to Koltchinskii and Panchenko [2002], even if we disregard our finer analysis related to Rademacher complexity. In Section 3, we present several multi-class surrogate losses that are motivated by our generalization bound, and discuss and compare their functional and consistency properties. In particular, we prove that our surrogate losses are realizable H-consistent, a hypothesis-set-specific notion of consistency that was recently introduced by Long and Servedio [2013]. Our results generalize those of Long and Servedio [2013] and admit simpler proofs. 
We also present a family of multi-class DeepBoost learning algorithms based on each of these surrogate losses, and prove a general convergence guarantee for them. In Section 4, we report the results of experiments demonstrating that multi-class DeepBoost outperforms AdaBoost.MR and multinomial (additive) logistic regression, as well as their L1-norm regularized variants, on several datasets. 2 Multi-class data-dependent learning guarantee for convex ensembles In this section, we present a data-dependent learning bound in the multi-class setting for convex ensembles based on multiple base hypothesis sets. Let $\mathcal{X}$ denote the input space. We denote by $\mathcal{Y} = \{1, \ldots, c\}$ a set of $c \geq 2$ classes. The label associated by a hypothesis $f\colon \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ to $x \in \mathcal{X}$ is given by $\operatorname{argmax}_{y \in \mathcal{Y}} f(x, y)$. The margin $\rho_f(x, y)$ of the function $f$ for a labeled example $(x, y) \in \mathcal{X} \times \mathcal{Y}$ is defined by $$\rho_f(x, y) = f(x, y) - \max_{y' \neq y} f(x, y'). \quad (1)$$ Thus, $f$ misclassifies $(x, y)$ iff $\rho_f(x, y) \leq 0$. We consider $p$ families $H_1, \ldots, H_p$ of functions mapping from $\mathcal{X} \times \mathcal{Y}$ to $[0, 1]$ and the ensemble family $F = \operatorname{conv}(\bigcup_{k=1}^p H_k)$, that is the family of functions $f$ of the form $f = \sum_{t=1}^T \alpha_t h_t$, where $\alpha = (\alpha_1, \ldots, \alpha_T)$ is in the simplex $\Delta$ and where, for each $t \in [1, T]$, $h_t$ is in $H_{k_t}$ for some $k_t \in [1, p]$. We assume that training and test points are drawn i.i.d. according to some distribution $D$ over $\mathcal{X} \times \mathcal{Y}$ and denote by $S = ((x_1, y_1), \ldots, (x_m, y_m))$ a training sample of size $m$ drawn according to $D^m$. For any $\rho > 0$, the generalization error $R(f)$, its $\rho$-margin error $R_\rho(f)$ and its empirical margin error are defined as follows: $$R(f) = \mathbb{E}_{(x,y)\sim D}\bigl[1_{\rho_f(x,y) \leq 0}\bigr], \quad R_\rho(f) = \mathbb{E}_{(x,y)\sim D}\bigl[1_{\rho_f(x,y) \leq \rho}\bigr], \quad \widehat{R}_{S,\rho}(f) = \mathbb{E}_{(x,y)\sim S}\bigl[1_{\rho_f(x,y) \leq \rho}\bigr], \quad (2)$$ where the notation $(x, y) \sim S$ indicates that $(x, y)$ is drawn according to the empirical distribution defined by $S$. For any family of hypotheses $G$ mapping $\mathcal{X} \times \mathcal{Y}$ to $\mathbb{R}$, we define $\Pi_1(G)$ by $\Pi_1(G) = \{x \mapsto h(x, y)\colon y \in \mathcal{Y}, h \in G\}.$
(3) The following theorem gives a margin-based Rademacher complexity bound for learning with ensembles of base classifiers with multiple hypothesis sets. As with other Rademacher complexity learning guarantees, our bound is data-dependent, which is an important and favorable characteristic of our results. Theorem 1. Assume $p > 1$ and let $H_1, \ldots, H_p$ be $p$ families of functions mapping from $\mathcal{X} \times \mathcal{Y}$ to $[0, 1]$. Fix $\rho > 0$. Then, for any $\delta > 0$, with probability at least $1 - \delta$ over the choice of a sample $S$ of size $m$ drawn i.i.d. according to $D$, the following inequality holds for all $f = \sum_{t=1}^T \alpha_t h_t \in F$: $$R(f) \leq \widehat{R}_{S,\rho}(f) + \frac{8c}{\rho} \sum_{t=1}^T \alpha_t \mathfrak{R}_m(\Pi_1(H_{k_t})) + \frac{2}{c\rho}\sqrt{\frac{\log p}{m}} + \sqrt{\Bigl\lceil \frac{4}{\rho^2} \log\Bigl(\frac{c^2 \rho^2 m}{4 \log p}\Bigr) \Bigr\rceil \frac{\log p}{m} + \frac{\log \frac{2}{\delta}}{2m}}.$$ Thus, $R(f) \leq \widehat{R}_{S,\rho}(f) + \frac{8c}{\rho} \sum_{t=1}^T \alpha_t \mathfrak{R}_m(H_{k_t}) + O\Bigl(\sqrt{\frac{\log p}{\rho^2 m} \log\bigl[\frac{\rho^2 c^2 m}{4 \log p}\bigr]}\Bigr)$. The full proof of Theorem 1 is given in Appendix B. Even for $p = 1$, that is for the special case of a single hypothesis set, our analysis improves upon the multi-class margin bound of Koltchinskii and Panchenko [2002] since our bound admits only a linear dependency on the number of classes $c$ instead of a quadratic one. However, the main remarkable benefit of this learning bound is that its complexity term admits an explicit dependency on the mixture coefficients $\alpha_t$. It is a weighted average of Rademacher complexities with mixture weights $\alpha_t$, $t \in [1, T]$. Thus, the second term of the bound suggests that, while some hypothesis sets $H_k$ used for learning could have a large Rademacher complexity, this may not negatively affect generalization if the corresponding total mixture weight (the sum of the $\alpha_t$s corresponding to that hypothesis set) is relatively small. Using such potentially complex families could help achieve a better margin on the training sample.
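Returning to definition (1), the multi-class margin is straightforward to compute from per-class scores. A minimal sketch (the scoring function `f` and the arrays below are hypothetical, not from the paper):

```python
import numpy as np

def margin(f, x, y, num_classes):
    """Margin rho_f(x, y) = f(x, y) - max_{y' != y} f(x, y'),
    for a scoring function f: (x, label) -> real value."""
    scores = np.array([f(x, yp) for yp in range(num_classes)])
    others = np.delete(scores, y)          # scores of all labels y' != y
    return scores[y] - others.max()

# Toy scoring function: x is a length-c score vector, f just indexes it.
f = lambda x, y: x[y]
x = np.array([0.2, 0.7, 0.1])
print(margin(f, x, y=1, num_classes=3))   # correct label wins: 0.7 - 0.2 = 0.5
print(margin(f, x, y=2, num_classes=3))   # misclassified: 0.1 - 0.7 = -0.6
```

A negative margin corresponds exactly to a misclassification, which is why the surrogate losses below are all decreasing functions of this quantity.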
The theorem cannot be proven via the standard Rademacher complexity analysis of Koltchinskii and Panchenko [2002] since the complexity term of the bound would then be $\mathfrak{R}_m(\operatorname{conv}(\bigcup_{k=1}^p H_k)) = \mathfrak{R}_m(\bigcup_{k=1}^p H_k)$, which does not admit an explicit dependency on the mixture weights and is lower bounded by $\sum_{t=1}^T \alpha_t \mathfrak{R}_m(H_{k_t})$. Thus, the theorem provides a finer learning bound than the one obtained via a standard Rademacher complexity analysis. 3 Algorithms In this section, we will use the learning guarantees just described to derive several new ensemble algorithms for multi-class classification. 3.1 Optimization problem Let $H_1, \ldots, H_p$ be $p$ disjoint families of functions taking values in $[0, 1]$ with increasing Rademacher complexities $\mathfrak{R}_m(H_k)$, $k \in [1, p]$. For any hypothesis $h \in \bigcup_{k=1}^p H_k$, we denote by $d(h)$ the index of the hypothesis set it belongs to, that is $h \in H_{d(h)}$. The bound of Theorem 1 holds uniformly for all $\rho > 0$ and functions $f \in \operatorname{conv}(\bigcup_{k=1}^p H_k)$. Since the last term of the bound does not depend on $\alpha$, it suggests selecting $\alpha$ to minimize $$G(\alpha) = \frac{1}{m} \sum_{i=1}^m 1_{\rho_f(x_i, y_i) \leq \rho} + \frac{8c}{\rho} \sum_{t=1}^T \alpha_t r_t,$$ where $r_t = \mathfrak{R}_m(H_{d(h_t)})$ and $\alpha \in \Delta$. (The condition $\sum_{t=1}^T \alpha_t = 1$ of Theorem 1 can be relaxed to $\sum_{t=1}^T \alpha_t \leq 1$; to see this, use for example a null hypothesis, $h_t = 0$ for some $t$.) Since for any $\rho > 0$, $f$ and $f/\rho$ admit the same generalization error, we can instead search for $\alpha \geq 0$ with $\sum_{t=1}^T \alpha_t \leq 1/\rho$, which leads to $$\min_{\alpha \geq 0} \; \frac{1}{m} \sum_{i=1}^m 1_{\rho_f(x_i, y_i) \leq 1} + 8c \sum_{t=1}^T \alpha_t r_t \quad \text{s.t.} \quad \sum_{t=1}^T \alpha_t \leq \frac{1}{\rho}. \quad (4)$$ The first term of the objective is not a convex function of $\alpha$ and its minimization is known to be computationally hard. Thus, we will consider instead a convex upper bound. Let $u \mapsto \Phi(-u)$ be a non-increasing convex function upper-bounding $u \mapsto 1_{u \leq 0}$ over $\mathbb{R}$. $\Phi$ may be selected to be, for example, the exponential function as in AdaBoost [Freund and Schapire, 1997] or the logistic function.
Using such an upper bound, we obtain the following convex optimization problem: $$\min_{\alpha \geq 0} \; \frac{1}{m} \sum_{i=1}^m \Phi\bigl(1 - \rho_f(x_i, y_i)\bigr) + \lambda \sum_{t=1}^T \alpha_t r_t \quad \text{s.t.} \quad \sum_{t=1}^T \alpha_t \leq \frac{1}{\rho}, \quad (5)$$ where we introduced a parameter $\lambda \geq 0$ controlling the balance between the magnitude of the values taken by the function $\Phi$ and the second term. Introducing a Lagrange variable $\beta \geq 0$ associated to the constraint in (5), the problem can be equivalently written as $$\min_{\alpha \geq 0} \; \frac{1}{m} \sum_{i=1}^m \Phi\Bigl(1 - \min_{y \neq y_i} \Bigl[\sum_{t=1}^T \alpha_t h_t(x_i, y_i) - \alpha_t h_t(x_i, y)\Bigr]\Bigr) + \sum_{t=1}^T (\lambda r_t + \beta) \alpha_t.$$ Here, $\beta$ is a parameter that can be freely selected by the algorithm since any choice of its value is equivalent to a choice of $\rho$ in (5). Since $\Phi$ is a non-decreasing function, the problem can be equivalently written as $$\min_{\alpha \geq 0} \; \frac{1}{m} \sum_{i=1}^m \max_{y \neq y_i} \Phi\Bigl(1 - \Bigl[\sum_{t=1}^T \alpha_t h_t(x_i, y_i) - \alpha_t h_t(x_i, y)\Bigr]\Bigr) + \sum_{t=1}^T (\lambda r_t + \beta) \alpha_t.$$ Let $\{h_1, \ldots, h_N\}$ be the set of distinct base functions, and let $F_{\max}$ be the objective function based on that expression: $$F_{\max}(\alpha) = \frac{1}{m} \sum_{i=1}^m \max_{y \neq y_i} \Phi\Bigl(1 - \sum_{j=1}^N \alpha_j h_j(x_i, y_i, y)\Bigr) + \sum_{j=1}^N \Lambda_j \alpha_j, \quad (6)$$ with $\alpha = (\alpha_1, \ldots, \alpha_N) \in \mathbb{R}^N$, $h_j(x_i, y_i, y) = h_j(x_i, y_i) - h_j(x_i, y)$, and $\Lambda_j = \lambda r_j + \beta$ for all $j \in [1, N]$. Then, our optimization problem can be rewritten as $\min_{\alpha \geq 0} F_{\max}(\alpha)$. This defines a convex optimization problem since the domain $\{\alpha \geq 0\}$ is a convex set and since $F_{\max}$ is convex: each term of the sum in its definition is convex as a pointwise maximum of convex functions (composition of the convex function $\Phi$ with an affine function) and the second term is a linear function of $\alpha$. In general, $F_{\max}$ is not differentiable even when $\Phi$ is, but, since it is convex, it admits a sub-differential at every point. Additionally, along each direction, $F_{\max}$ admits left and right derivatives, both non-decreasing, and a differential everywhere except for a set that is at most countable.
3.2 Alternative objective functions We now consider the following three natural upper bounds on $F_{\max}$ which admit useful properties that we will discuss later, the third one valid when $\Phi$ can be written as the composition of two functions $\Phi_1$ and $\Phi_2$ with $\Phi_1$ a non-decreasing function: $$F_{\text{sum}}(\alpha) = \frac{1}{m} \sum_{i=1}^m \sum_{y \neq y_i} \Phi\Bigl(1 - \sum_{j=1}^N \alpha_j h_j(x_i, y_i, y)\Bigr) + \sum_{j=1}^N \Lambda_j \alpha_j \quad (7)$$ $$F_{\text{maxsum}}(\alpha) = \frac{1}{m} \sum_{i=1}^m \Phi\Bigl(1 - \sum_{j=1}^N \alpha_j \rho_{h_j}(x_i, y_i)\Bigr) + \sum_{j=1}^N \Lambda_j \alpha_j \quad (8)$$ $$F_{\text{compsum}}(\alpha) = \frac{1}{m} \sum_{i=1}^m \Phi_1\Bigl(\sum_{y \neq y_i} \Phi_2\Bigl(1 - \sum_{j=1}^N \alpha_j h_j(x_i, y_i, y)\Bigr)\Bigr) + \sum_{j=1}^N \Lambda_j \alpha_j. \quad (9)$$ $F_{\text{sum}}$ is obtained from $F_{\max}$ simply by replacing in the definition of $F_{\max}$ the max operator by a sum. Clearly, the function $F_{\text{sum}}$ is convex and inherits the differentiability properties of $\Phi$. A drawback of $F_{\text{sum}}$ is that for problems with very large $c$, as in structured prediction, the computation of the sum may require resorting to approximations. (A remark on problem (5): introducing the trade-off parameter $\lambda$ is a standard practice in optimization. The optimization problem in (4) is equivalent to a vector optimization problem, where $(\sum_{i=1}^m 1_{\rho_f(x_i, y_i) \leq 1}, \sum_{t=1}^T \alpha_t r_t)$ is minimized over $\alpha$; the latter problem can be scalarized, leading to the introduction of the parameter $\lambda$ in (5).) $F_{\text{maxsum}}$ is obtained from $F_{\max}$ by noticing that, by the sub-additivity of the max operator, the following inequality holds: $$\max_{y \neq y_i} \sum_{j=1}^N -\alpha_j h_j(x_i, y_i, y) \leq \sum_{j=1}^N \max_{y \neq y_i} -\alpha_j h_j(x_i, y_i, y) = -\sum_{j=1}^N \alpha_j \rho_{h_j}(x_i, y_i).$$ As with $F_{\text{sum}}$, the function $F_{\text{maxsum}}$ is convex and admits the same differentiability properties as $\Phi$. Unlike $F_{\text{sum}}$, $F_{\text{maxsum}}$ does not require computing a sum over the classes. Furthermore, note that the expressions $\rho_{h_j}(x_i, y_i)$, $i \in [1, m]$, can be pre-computed prior to the application of any optimization algorithm. Finally, for $\Phi = \Phi_1 \circ \Phi_2$ with $\Phi_1$ non-decreasing, the max operator can be replaced by a sum before applying $\Phi_1$, as follows: $$\max_{y \neq y_i} \Phi\bigl(1 - f(x_i, y_i, y)\bigr) = \Phi_1\Bigl(\max_{y \neq y_i} \Phi_2\bigl(1 - f(x_i, y_i, y)\bigr)\Bigr) \leq \Phi_1\Bigl(\sum_{y \neq y_i} \Phi_2\bigl(1 - f(x_i, y_i, y)\bigr)\Bigr),$$ where $f(x_i, y_i, y) = \sum_{j=1}^N \alpha_j h_j(x_i, y_i, y)$. This leads to the definition of $F_{\text{compsum}}$.
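To make the relationship between these objectives concrete, here is a small numerical sketch (with randomly generated scores; all array names are our own, not the paper's) comparing $F_{\max}$ and $F_{\text{sum}}$ under the exponential surrogate. Since every summand is positive, the per-example max never exceeds the per-example sum, so $F_{\max} \leq F_{\text{sum}}$ always holds:

```python
import numpy as np

rng = np.random.default_rng(0)
m, c, N = 20, 4, 5                     # examples, classes, base hypotheses
H = rng.uniform(0, 1, size=(N, m, c))  # h_j(x_i, y) scores in [0, 1]
y = rng.integers(0, c, size=m)         # true labels
alpha = rng.uniform(0, 0.3, size=N)
Lam = rng.uniform(0, 1e-3, size=N)     # Lambda_j = lambda * r_j + beta

Phi = np.exp                           # exponential surrogate: Phi(-u) = exp(-u)

def pairwise_h(j):
    # h_j(x_i, y_i, y) = h_j(x_i, y_i) - h_j(x_i, y), shape (m, c)
    return H[j, np.arange(m), y][:, None] - H[j]

F = sum(alpha[j] * pairwise_h(j) for j in range(N))   # f(x_i, y_i, y), shape (m, c)
mask = np.ones((m, c), bool)
mask[np.arange(m), y] = False                         # exclude y = y_i
loss = Phi(1 - F)                                     # Phi(1 - f(x_i, y_i, y))
reg = Lam @ alpha
F_max = loss[mask].reshape(m, c - 1).max(axis=1).mean() + reg
F_sum = loss[mask].reshape(m, c - 1).sum(axis=1).mean() + reg
print(F_max, F_sum)   # F_max <= F_sum on any data
```

The same arrays could be reused to evaluate $F_{\text{maxsum}}$ and $F_{\text{compsum}}$ by pre-computing the margins $\rho_{h_j}$, as noted above.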
In Appendix C, we discuss the consistency properties of the loss functions just introduced. In particular, we prove that the loss functions associated to $F_{\max}$ and $F_{\text{sum}}$ are realizable H-consistent (see Long and Servedio [2013]) in the common cases where the exponential or logistic losses are used and that, similarly, in the common case where $\Phi_1(u) = \log(1 + u)$ and $\Phi_2(u) = \exp(u + 1)$, the loss function associated to $F_{\text{compsum}}$ is H-consistent. Furthermore, in Appendix D, we show that, under some mild assumptions, the objective functions we just discussed are essentially within a constant factor of each other. Moreover, in the case of binary classification all of these objectives coincide. 3.3 Multi-class DeepBoost algorithms In this section, we discuss in detail a family of multi-class DeepBoost algorithms, which are derived by application of coordinate descent to the objective functions discussed in the previous paragraphs. We will assume that $\Phi$ is differentiable over $\mathbb{R}$ and that $\Phi'(u) \neq 0$ for all $u$. This condition is not necessary; in particular, our presentation can be extended to non-differentiable functions such as the hinge loss, but it simplifies the presentation. In the case of the objective function $F_{\text{compsum}}$, we will assume that both $\Phi_1$ and $\Phi_2$, where $\Phi = \Phi_1 \circ \Phi_2$, are differentiable. Under these assumptions, $F_{\text{sum}}$, $F_{\text{maxsum}}$, and $F_{\text{compsum}}$ are differentiable. $F_{\max}$ is not differentiable due to the presence of the max operators in its definition, but it admits a sub-differential at every point. For convenience, let $\alpha_t = (\alpha_{t,1}, \ldots, \alpha_{t,N})^\top$ denote the vector obtained after $t \geq 1$ iterations and let $\alpha_0 = 0$. Let $e_k$ denote the $k$th unit vector in $\mathbb{R}^N$, $k \in [1, N]$. For a differentiable objective $F$, we denote by $F'(\alpha, e_j)$ the directional derivative of $F$ along the direction $e_j$ at $\alpha$.
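Using directional derivatives of this kind, a projected coordinate descent loop over the constraint set $\{\alpha \geq 0\}$ can be sketched as follows. This is a toy illustration with numeric derivatives and a crude backtracking line search, not the closed-form steps used by MDeepBoost; the function names and the test objective are our own:

```python
import numpy as np

def projected_coordinate_descent(F, dim, iters=200, eps=1e-6):
    """Sketch of projected coordinate descent over {alpha >= 0}:
    pick the coordinate with the largest-magnitude directional derivative,
    then search along it while keeping alpha non-negative."""
    alpha = np.zeros(dim)
    for _ in range(iters):
        grad = np.array([(F(alpha + eps * e) - F(alpha - eps * e)) / (2 * eps)
                         for e in np.eye(dim)])
        k = int(np.argmax(np.abs(grad)))
        # crude line search along +/- e_k, projected onto alpha >= 0
        best, best_eta = F(alpha), 0.0
        for eta in [s * 2.0 ** -p for s in (1, -1) for p in range(12)]:
            cand = alpha.copy()
            cand[k] = max(0.0, cand[k] + eta)
            if F(cand) < best:
                best, best_eta = F(cand), cand[k] - alpha[k]
        if best_eta == 0.0:
            break                       # no descent direction left
        alpha[k] += best_eta
    return alpha

# Strongly convex test objective whose constrained minimum is on the boundary.
F = lambda a: (a[0] - 1.0) ** 2 + (a[1] + 0.5) ** 2
a = projected_coordinate_descent(F, 2)
print(a)   # approximately [1.0, 0.0]: the projection of (1, -0.5) onto a >= 0
```

The boosting algorithms below replace the numeric line search with closed-form steps for the exponential and logistic losses.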
Our coordinate descent algorithm consists of first determining the direction of maximal descent, that is $k = \operatorname{argmax}_{j \in [1, N]} |F'(\alpha_{t-1}, e_j)|$, next of determining the best step $\eta$ along that direction that preserves the non-negativity of $\alpha$, $\eta = \operatorname{argmin}_{\alpha_{t-1} + \eta e_k \geq 0} F(\alpha_{t-1} + \eta e_k)$, and updating $\alpha_{t-1}$ to $\alpha_t = \alpha_{t-1} + \eta e_k$. We will refer to this method as projected coordinate descent. The following theorem provides a convergence guarantee for our algorithms in that case. Theorem 2. Assume that $\Phi$ is twice differentiable and that $\Phi''(u) > 0$ for all $u \in \mathbb{R}$. Then, the projected coordinate descent algorithm applied to $F$ converges to the solution $\alpha^*$ of the optimization problem $\min_{\alpha \geq 0} F(\alpha)$ for $F = F_{\text{sum}}$, $F = F_{\text{maxsum}}$, or $F = F_{\text{compsum}}$. If additionally $\Phi$ is strongly convex over the path of the iterates $\alpha_t$, then there exist $\tau > 0$ and $\gamma > 0$ such that for all $t > \tau$, $$F(\alpha_{t+1}) - F(\alpha^*) \leq \Bigl(1 - \frac{1}{\gamma}\Bigr)\bigl(F(\alpha_t) - F(\alpha^*)\bigr). \quad (10)$$ The proof is given in Appendix I and is based on the results of Luo and Tseng [1992]. The theorem can in fact be extended to the case where, instead of the best direction, the derivative for the direction selected at each round is within a constant threshold of the best [Luo and Tseng, 1992]. The conditions of Theorem 2 hold for many cases in practice, in particular in the case of the exponential loss ($\Phi = \exp$) or the logistic loss ($\Phi(-x) = \log_2(1 + e^{-x})$). In particular, linear convergence is guaranteed in those cases since both the exponential and logistic losses are strongly convex over a compact set containing the converging sequence of $\alpha_t$s. MDEEPBOOSTSUM(S = ((x1, y1), . . .
, (xm, ym)))
1  for i ← 1 to m do
2    for y ∈ Y − {y_i} do
3      D_1(i, y) ← 1 / (m(c − 1))
4  for t ← 1 to T do
5    k ← argmin_{j ∈ [1, N]} ϵ_{t,j} + Λ_j m / (2 S_t)
6    if (1 − ϵ_{t,k}) e^{α_{t−1,k}} − ϵ_{t,k} e^{−α_{t−1,k}} < Λ_k m / S_t then
7      η_t ← −α_{t−1,k}
8    else η_t ← log[ −Λ_k m / (2 ϵ_t S_t) + sqrt( (Λ_k m / (2 ϵ_t S_t))² + (1 − ϵ_t)/ϵ_t ) ]
9    α_t ← α_{t−1} + η_t e_k
10   S_{t+1} ← Σ_{i=1}^m Σ_{y ≠ y_i} Φ′(1 − Σ_{j=1}^N α_{t,j} h_j(x_i, y_i, y))
11   for i ← 1 to m do
12     for y ∈ Y − {y_i} do
13       D_{t+1}(i, y) ← Φ′(1 − Σ_{j=1}^N α_{t,j} h_j(x_i, y_i, y)) / S_{t+1}
14 f ← Σ_{j=1}^N α_{t,j} h_j
15 return f
Figure 1: Pseudocode of the MDeepBoostSum algorithm for both the exponential loss and the logistic loss. The expression of the weighted error $\epsilon_{t,j}$ is given in (12). We will refer to the algorithm defined by projected coordinate descent applied to $F_{\text{sum}}$ by MDeepBoostSum, to $F_{\text{maxsum}}$ by MDeepBoostMaxSum, to $F_{\text{compsum}}$ by MDeepBoostCompSum, and to $F_{\max}$ by MDeepBoostMax. In the following, we briefly describe MDeepBoostSum, including its pseudocode. We give a detailed description of all of these algorithms in the supplementary material: MDeepBoostSum (Appendix E), MDeepBoostMaxSum (Appendix F), MDeepBoostCompSum (Appendix G), MDeepBoostMax (Appendix H). Define $f_{t-1} = \sum_{j=1}^N \alpha_{t-1,j} h_j$. Then, $F_{\text{sum}}(\alpha_{t-1})$ can be rewritten as follows: $$F_{\text{sum}}(\alpha_{t-1}) = \frac{1}{m} \sum_{i=1}^m \sum_{y \neq y_i} \Phi\bigl(1 - f_{t-1}(x_i, y_i, y)\bigr) + \sum_{j=1}^N \Lambda_j \alpha_{t-1,j}.$$ For any $t \in [1, T]$, we denote by $D_t$ the distribution over $[1, m] \times [1, c]$ defined for all $i \in [1, m]$ and $y \in \mathcal{Y} - \{y_i\}$ by $$D_t(i, y) = \frac{\Phi'\bigl(1 - f_{t-1}(x_i, y_i, y)\bigr)}{S_t}, \quad (11)$$ where $S_t$ is a normalization factor, $S_t = \sum_{i=1}^m \sum_{y \neq y_i} \Phi'\bigl(1 - f_{t-1}(x_i, y_i, y)\bigr)$. For any $j \in [1, N]$ and $s \in [1, T]$, we also define the weighted error $\epsilon_{s,j}$ as follows: $$\epsilon_{s,j} = \frac{1}{2}\Bigl[1 - \mathbb{E}_{(i,y) \sim D_s}\bigl[h_j(x_i, y_i, y)\bigr]\Bigr]. \quad (12)$$ Figure 1 gives the pseudocode of the MDeepBoostSum algorithm. The details of the derivation of the expressions are given in Appendix E. In the special cases of the exponential loss ($\Phi(-u) = \exp(-u)$) or the logistic loss ($\Phi(-u) = \log_2(1 + \exp(-u))$), a closed-form expression is given for the step size (lines 6-8), which is the same in both cases (see Sections E.2.1 and E.2.2).
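The distribution update (11) and weighted error (12) vectorize naturally. A sketch with synthetic scores for the exponential loss, where $\Phi' = \exp$ (all array names are our own, assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
m, c, N = 30, 3, 4
H = rng.uniform(0, 1, (N, m, c))       # h_j(x_i, y) scores
y = rng.integers(0, c, m)              # true labels
alpha = rng.uniform(0, 0.2, N)         # current ensemble weights

Phi_prime = np.exp                     # Phi = exp  =>  Phi'(u) = exp(u)

# f_{t-1}(x_i, y_i, y) = sum_j alpha_j [h_j(x_i, y_i) - h_j(x_i, y)]
f = np.einsum('j,jik->ik', alpha, H[:, np.arange(m), y][:, :, None] - H)
mask = np.ones((m, c), bool)
mask[np.arange(m), y] = False          # D_t is only supported on y != y_i

w = Phi_prime(1 - f) * mask            # unnormalized weights, zero at y = y_i
S = w.sum()                            # normalization factor S_t
D = w / S                              # Eq. (11): distribution over (i, y != y_i)

def weighted_error(j):
    # Eq. (12): epsilon_{t,j} = (1/2)[1 - E_{(i,y)~D_t} h_j(x_i, y_i, y)]
    h = H[j, np.arange(m), y][:, None] - H[j]
    return 0.5 * (1.0 - (D * h).sum())

print(D.sum())                         # sums to 1
print([weighted_error(j) for j in range(N)])
```

Since $h_j(x_i, y_i, y) \in [-1, 1]$, each weighted error lands in $[0, 1]$, with values below $1/2$ indicating a base hypothesis that is useful under the current distribution.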
In the generic case, the step size can be found using a line search or other numerical methods. The algorithms presented above have several connections with other boosting algorithms, particularly in the absence of regularization. We discuss these connections in detail in Appendix K. 4 Experiments The algorithms presented in the previous sections can be used with a variety of different base classifier sets. For our experiments, we used multi-class binary decision trees. A multi-class binary decision tree in dimension $d$ can be defined by a pair $(t, h)$, where $t$ is a binary tree with a variable-threshold question at each internal node, e.g., $X_j \leq \theta$, $j \in [1, d]$, and $h = (h_l)_{l \in \text{Leaves}(t)}$ a vector of distributions over the leaves $\text{Leaves}(t)$ of $t$. At any leaf $l \in \text{Leaves}(t)$, $h_l(y) \in [0, 1]$ for all $y \in \mathcal{Y}$ and $\sum_{y \in \mathcal{Y}} h_l(y) = 1$. For convenience, we will denote by $t(x)$ the leaf $l \in \text{Leaves}(t)$ associated to $x$ by $t$. Thus, the score associated by $(t, h)$ to a pair $(x, y) \in \mathcal{X} \times \mathcal{Y}$ is $h_l(y)$ where $l = t(x)$. Let $T_n$ denote the family of all multi-class decision trees with $n$ internal nodes in dimension $d$. In Appendix J, we derive the following upper bound on the Rademacher complexity of $T_n$: $$\mathfrak{R}_m(\Pi_1(T_n)) \leq \sqrt{\frac{(4n + 2) \log_2(d + 2) \log(m + 1)}{m}}. \quad (13)$$ All of the experiments in this section use $T_n$ as the family of base hypothesis sets (parametrized by $n$). Since $T_n$ is a very large hypothesis set when $n$ is large, for the sake of computational efficiency we make a few approximations. First, although our MDeepBoost algorithms were derived in terms of Rademacher complexity, we use the upper bound in Eq. (13) in place of the Rademacher complexity (thus, in Algorithm 1 we let $\Lambda_n = \lambda B_n + \beta$, where $B_n$ is the bound given in Eq. (13)).
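Eq. (13) is cheap to evaluate, which is what makes it practical as the per-tree-size penalty $B_n$. A direct transcription (function name is ours):

```python
import math

def tree_rademacher_bound(n, d, m):
    """Upper bound (13) on R_m(Pi_1(T_n)) for multi-class decision trees
    with n internal nodes in dimension d, given a sample of size m."""
    return math.sqrt((4 * n + 2) * math.log2(d + 2) * math.log(m + 1) / m)

# The penalty grows with tree size n and shrinks with sample size m,
# which is exactly the trade-off MDeepBoost exploits when choosing trees.
print(tree_rademacher_bound(1, 10, 1000))    # a stump: small penalty
print(tree_rademacher_bound(15, 10, 1000))   # a deeper tree: larger penalty
```

With more data the penalty for a fixed tree size shrinks, so larger trees become affordable, mirroring the structural-risk-minimization reading of the algorithm.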
Secondly, instead of exhaustively searching for the best decision tree in $T_n$ for each possible size $n$, we use the following greedy procedure: given the best decision tree of size $n$ (starting with $n = 1$), we find the best decision tree of size $n + 1$ that can be obtained by splitting one leaf, and continue this procedure until some maximum depth $K$. Decision trees are commonly learned in this manner, and so in this context our Rademacher-complexity-based bounds can be viewed as a novel stopping criterion for decision tree learning. Let $H^*_K$ be the set of trees found by the greedy algorithm just described. In each iteration $t$ of MDeepBoost, we select the best tree in the set $H^*_K \cup \{h_1, \ldots, h_{t-1}\}$, where $h_1, \ldots, h_{t-1}$ are the trees selected in previous iterations. While we described many objective functions that can be used as the basis of a multi-class deep boosting algorithm, the experiments in this section focus on algorithms derived from $F_{\text{sum}}$. We also refer the reader to Table 3 in Appendix A for results of experiments with $F_{\text{compsum}}$ objective functions. The $F_{\text{sum}}$ and $F_{\text{compsum}}$ objectives combine several advantages that suggest they will perform well empirically. $F_{\text{sum}}$ is consistent and both $F_{\text{sum}}$ and $F_{\text{compsum}}$ are (by Theorem 4) H-consistent. Also, unlike $F_{\max}$, both of these objectives are differentiable, and therefore the convergence guarantee in Theorem 2 applies. Our preliminary findings also indicate that algorithms based on the $F_{\text{sum}}$ and $F_{\text{compsum}}$ objectives perform better than those derived from $F_{\max}$ and $F_{\text{maxsum}}$. All of our objective functions require a choice for $\Phi$, the loss function. Since Cortes et al. [2014] reported comparable results for the exponential and logistic losses for the binary version of DeepBoost, we let $\Phi$ be the exponential loss in all of our experiments with MDeepBoostSum. For MDeepBoostCompSum we select $\Phi_1(u) = \log_2(1 + u)$ and $\Phi_2(-u) = \exp(-u)$.
In our experiments, we used 8 UCI data sets: abalone, handwritten, letters, pageblocks, pendigits, satimage, statlog and yeast – see more details on these datasets in Table 4, Appendix L. In Appendix K, we explain that when λ = β = 0 then MDeepBoostSum is equivalent to AdaBoost.MR. Also, if we set λ = 0 and β ≠ 0 then the resulting algorithm is an L1-norm regularized variant of AdaBoost.MR. We compared MDeepBoostSum to these two algorithms, with the results also reported in Table 1 and Table 2 in Appendix A. Likewise, we compared MDeepBoostCompSum with multinomial (additive) logistic regression, LogReg, and its L1-regularized version LogReg-L1, which, as discussed in Appendix K, are equivalent to MDeepBoostCompSum when λ = β = 0 and λ = 0, β ≥ 0, respectively. Finally, we remark that it can be argued that the parameter optimization procedure (described below) significantly extends AdaBoost.MR since it effectively implements structural risk minimization: for each tree depth, the empirical error is minimized and we choose the depth to achieve the best generalization error. All of these algorithms use the maximum tree depth K as a parameter. L1-norm regularized versions admit two parameters: K and β ≥ 0. Deep boosting algorithms have a third parameter, λ ≥ 0. To set these parameters, we used the following parameter optimization procedure: we randomly partitioned each dataset into 4 folds and, for each tuple (λ, β, K) in the set of possible parameters (described below), we ran MDeepBoostSum with a different assignment of folds to the training
Table 1: Empirical results for MDeepBoostSum, Φ = exp. AB stands for AdaBoost.
Dataset      | AB.MR           | AB.MR-L1        | MDeepBoost
abalone      | 0.739 (0.0016)  | 0.737 (0.0065)  | 0.735 (0.0045)
handwritten  | 0.024 (0.0011)  | 0.025 (0.0018)  | 0.021 (0.0015)
letters      | 0.065 (0.0018)  | 0.059 (0.0059)  | 0.058 (0.0039)
pageblocks   | 0.035 (0.0045)  | 0.035 (0.0031)  | 0.033 (0.0014)
pendigits    | 0.014 (0.0025)  | 0.014 (0.0013)  | 0.012 (0.0011)
satimage     | 0.112 (0.0123)  | 0.117 (0.0096)  | 0.117 (0.0087)
statlog      | 0.029 (0.0026)  | 0.026 (0.0071)  | 0.024 (0.0008)
yeast        | 0.415 (0.0353)  | 0.410 (0.0324)  | 0.407 (0.0282)
(Each cell reports the test error, with the standard deviation in parentheses.)
set, validation set, and test set for each run. Specifically, for each run i ∈ {0, 1, 2, 3}, fold i was used for testing, fold i + 1 (mod 4) was used for validation, and the remaining folds were used for training. For each run, we selected the parameters that had the lowest error on the validation set and then measured the error of those parameters on the test set. The average test error and the standard deviation of the test error over all 4 runs are reported in Table 1. Note that an alternative procedure to compare algorithms that is adopted in a number of previous studies of boosting [Li, 2009a,b, Sun et al., 2012] is to simply record the average test error of the best parameter tuples over all runs. While it is of course possible to overestimate the performance of a learning algorithm by optimizing hyperparameters on the test set, this concern is less valid when the size of the test set is large relative to the "complexity" of the hyperparameter space. We report results for this alternative procedure in Table 2 and Table 3, Appendix A. For each dataset, the set of possible values for λ and β was initialized to {10^−5, 10^−6, . . . , 10^−10}, and to {1, 2, 3, 4, 5} for the maximum tree depth K.
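The fold-rotation protocol just described can be written down directly; the function and key names below are our own:

```python
def fold_rotation(num_folds=4):
    """The 4-fold protocol from the experiments: run i tests on fold i,
    validates on fold (i + 1) mod num_folds, and trains on the rest."""
    runs = []
    for i in range(num_folds):
        test, val = i, (i + 1) % num_folds
        train = [f for f in range(num_folds) if f not in (test, val)]
        runs.append({'test': test, 'val': val, 'train': train})
    return runs

for run in fold_rotation():
    print(run)
# Each fold serves exactly once as the test set and once as the validation set,
# so the four reported errors come from disjoint test folds.
```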
However, if we found an optimal parameter value to be at the end point of these ranges, we extended the interval in that direction (by an order of magnitude for λ and β, and by 1 for the maximum tree depth K) and re-ran the experiments. We have also experimented with 200 and 500 iterations, but we observed that the errors do not change significantly and the ranking of the algorithms remains the same. The results of our experiments show that, for each dataset, deep boosting algorithms outperform the other algorithms evaluated in our experiments. Let us point out that, even though not all of our results are statistically significant, MDeepBoostSum outperforms AdaBoost.MR and AdaBoost.MR-L1 (and, hence, effectively structural risk minimization) on each dataset. More importantly, for each dataset MDeepBoostSum outperforms the other algorithms on most of the individual runs. Moreover, results for some datasets presented here (namely pendigits) appear to be state-of-the-art. We also refer our reader to the experimental results summarized in Table 2 and Table 3 in Appendix A. These results provide further evidence in favor of DeepBoost algorithms. The consistent performance improvement of MDeepBoostSum over AdaBoost.MR and its L1-norm regularized variant shows the benefit of the new complexity-based regularization we introduced. 5 Conclusion We presented new data-dependent learning guarantees for convex ensembles in the multi-class setting where the base classifier set is composed of increasingly complex sub-families, including very deep or complex ones. These learning bounds generalize to the multi-class setting the guarantees presented by Cortes et al. [2014] in the binary case. We also introduced and discussed several new multi-class ensemble algorithms benefiting from these guarantees and proved positive results for the H-consistency and convergence of several of them.
Finally, we reported the results of several experiments with DeepBoost algorithms, and compared their performance with that of AdaBoost.MR and additive multinomial Logistic Regression and their L1-regularized variants. Acknowledgments We thank Andres Muñoz Medina and Scott Yang for discussions and help with the experiments. This work was partly funded by the NSF award IIS-1117591 and supported by a NSERC PGS grant. References P. Bühlmann and B. Yu. Boosting with the L2 loss. J. of the Amer. Stat. Assoc., 98(462):324–339, 2003. M. Collins, R. E. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 48:253–285, September 2002. C. Cortes, M. Mohri, and U. Syed. Deep boosting. In ICML, pages 1179–1187, 2014. T. G. Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine Learning, 40(2):139–157, 2000. J. C. Duchi and Y. Singer. Boosting with structural sparsity. In ICML, page 38, 2009. N. Duffy and D. P. Helmbold. Potential boosters? In NIPS, pages 258–264, 1999. Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer System Sciences, 55(1):119–139, 1997. J. H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29:1189–1232, 2000. J. H. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28:2000, 1998. A. J. Grove and D. Schuurmans. Boosting in the limit: Maximizing the margin of learned ensembles. In AAAI/IAAI, pages 692–699, 1998. J. Kivinen and M. K. Warmuth. Boosting as entropy projection. In COLT, pages 134–144, 1999. V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. Annals of Statistics, 30, 2002. M. Ledoux and M. Talagrand. 
Probability in Banach Spaces: Isoperimetry and Processes. Springer, 1991. P. Li. ABC-boost: adaptive base class boost for multi-class classification. In ICML, page 79, 2009a. P. Li. ABC-logitboost for multi-class classification. Technical report, Rutgers University, 2009b. P. M. Long and R. A. Servedio. Consistency versus realizable H-consistency for multiclass classification. In ICML (3), pages 801–809, 2013. Z.-Q. Luo and P. Tseng. On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7–35, 1992. L. Mason, J. Baxter, P. L. Bartlett, and M. R. Frean. Boosting algorithms as gradient descent. In NIPS, 1999. M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. The MIT Press, 2012. I. Mukherjee and R. E. Schapire. A theory of multiclass boosting. JMLR, 14(1):437–497, 2013. G. Rätsch and M. K. Warmuth. Maximizing the margin with boosting. In COLT, pages 334–350, 2002. G. Rätsch and M. K. Warmuth. Efficient margin maximizing with boosting. JMLR, 6:2131–2152, 2005. G. Rätsch, S. Mika, and M. K. Warmuth. On the convergence of leveraging. In NIPS, pages 487–494, 2001a. G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287–320, 2001b. R. E. Schapire. Theoretical views of boosting and applications. In Proceedings of ALT 1999, volume 1720 of Lecture Notes in Computer Science, pages 13–25. Springer, 1999. R. E. Schapire and Y. Freund. Boosting: Foundations and Algorithms. The MIT Press, 2012. R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297–336, 1999. R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. In ICML, pages 322–330, 1997. P. Sun, M. D. Reid, and J. Zhou. AOSO-LogitBoost: Adaptive one-vs-one logitboost for multi-class problem. In ICML, 2012. A. 
Tewari and P. L. Bartlett. On the consistency of multiclass classification methods. JMLR, 8:1007–1025, 2007. M. K. Warmuth, J. Liao, and G. Rätsch. Totally corrective boosting algorithms that maximize the margin. In ICML, pages 1001–1008, 2006. T. Zhang. Statistical analysis of some multi-category large margin classification methods. JMLR, 5:1225–1251, 2004a. T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32(1):56–85, 2004b. J. Zhu, H. Zou, S. Rosset, and T. Hastie. Multi-class AdaBoost. Statistics and Its Interface, 2009. H. Zou, J. Zhu, and T. Hastie. New multicategory boosting algorithms based on multicategory Fisher-consistent losses. Annals of Statistics, 2(4):1290–1306, 2008.
Depth Map Prediction from a Single Image using a Multi-Scale Deep Network David Eigen deigen@cs.nyu.edu Christian Puhrsch cpuhrsch@nyu.edu Rob Fergus fergus@cs.nyu.edu Dept. of Computer Science, Courant Institute, New York University Abstract Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation. 1 Introduction Estimating depth is an important component of understanding geometric relations within a scene. In turn, such relations help provide richer representations of objects and their environment, often leading to improvements in existing recognition tasks [18], as well as enabling many further applications such as 3D modeling [16, 6], physics and support models [18], robotics [4, 14], and potentially reasoning about occlusions. While there is much prior work on estimating depth based on stereo images or motion [17], there has been relatively little on estimating depth from a single image. 
Yet the monocular case often arises in practice: Potential applications include better understandings of the many images distributed on the web and social media outlets, real estate listings, and shopping sites. These include many examples of both indoor and outdoor scenes. There are likely several reasons why the monocular case has not yet been tackled to the same degree as the stereo one. Provided accurate image correspondences, depth can be recovered deterministically in the stereo case [5]. Thus, stereo depth estimation can be reduced to developing robust image point correspondences — which can often be found using local appearance features. By contrast, estimating depth from a single image requires the use of monocular depth cues such as line angles and perspective, object sizes, image position, and atmospheric effects. Furthermore, a global view of the scene may be needed to relate these effectively, whereas local disparity is sufficient for stereo. Moreover, the task is inherently ambiguous, and a technically ill-posed problem: Given an image, an infinite number of possible world scenes may have produced it. Of course, most of these are physically implausible for real-world spaces, and thus the depth may still be predicted with considerable accuracy. At least one major ambiguity remains, though: the global scale. Although extreme cases (such as a normal room versus a dollhouse) do not exist in the data, moderate variations in room and furniture sizes are present. We address this using a scale-invariant error in addition to more 1 common scale-dependent errors. This focuses attention on the spatial relations within a scene rather than general scale, and is particularly apt for applications such as 3D modeling, where the model is often rescaled during postprocessing. In this paper we present a new approach for estimating depth from a single image. 
We directly regress on the depth using a neural network with two components: one that first estimates the global structure of the scene, then a second that refines it using local information. The network is trained using a loss that explicitly accounts for depth relations between pixel locations, in addition to pointwise error. Our system achieves state-of-the-art estimation rates on NYU Depth and KITTI, as well as improved qualitative outputs. 2 Related Work Directly related to our work are several approaches that estimate depth from a single image. Saxena et al. [15] predict depth from a set of image features using linear regression and a MRF, and later extend their work into the Make3D [16] system for 3D model generation. However, the system relies on horizontal alignment of images, and suffers in less controlled settings. Hoiem et al. [6] do not predict depth explicitly, but instead categorize image regions into geometric structures (ground, sky, vertical), which they use to compose a simple 3D model of the scene. More recently, Ladicky et al. [12] show how to integrate semantic object labels with monocular depth features to improve performance; however, they rely on handcrafted features and use superpixels to segment the image. Karsch et al. [7] use a kNN transfer mechanism based on SIFT Flow [11] to estimate depths of static backgrounds from single images, which they augment with motion information to better estimate moving foreground subjects in videos. This can achieve better alignment, but requires the entire dataset to be available at runtime and performs expensive alignment procedures. By contrast, our method learns an easier-to-store set of network parameters, and can be applied to images in real-time. More broadly, stereo depth estimation has been extensively investigated. Scharstein et al. [17] provide a survey and evaluation of many methods for 2-frame stereo correspondence, organized by matching, aggregation and optimization techniques. 
In a creative application of multiview stereo, Snavely et al. [20] match across views of many uncalibrated consumer photographs of the same scene to create accurate 3D reconstructions of common landmarks. Machine learning techniques have also been applied in the stereo case, often obtaining better results while relaxing the need for careful camera alignment [8, 13, 21, 19]. Most relevant to this work is Konda et al. [8], who train a factored autoencoder on image patches to predict depth from stereo sequences; however, this relies on the local displacements provided by stereo. There are also several hardware-based solutions for single-image depth estimation. Levin et al. [10] perform depth from defocus using a modified camera aperture, while the Kinect and Kinect v2 use active stereo and time-of-flight to capture depth. Our method makes indirect use of such sensors to provide ground truth depth targets during training; however, at test time our system is purely software-based, predicting depth from RGB images. 3 Approach 3.1 Model Architecture Our network is made of two component stacks, shown in Fig. 1. A coarse-scale network first predicts the depth of the scene at a global level. This is then refined within local regions by a fine-scale network. Both stacks are applied to the original input, but in addition, the coarse network’s output is passed to the fine network as additional first-layer image features. In this way, the local network can edit the global prediction to incorporate finer-scale details. 3.1.1 Global Coarse-Scale Network The task of the coarse-scale network is to predict the overall depth map structure using a global view of the scene. The upper layers of this network are fully connected, and thus contain the entire image in their field of view. Similarly, the lower and middle layers are designed to combine information from different parts of the image through max-pooling operations to a small spatial dimension. 
In so doing, the network is able to integrate a global understanding of the full scene to predict the depth. Such an understanding is needed in the single-image case to make effective use of cues such as vanishing points, object locations, and room alignment. A local view (as is commonly used for stereo matching) is insufficient to notice important features such as these. As illustrated in Fig. 1, the global, coarse-scale network contains five feature extraction layers of convolution and max-pooling, followed by two fully connected layers. The input, feature map and output sizes are also given in Fig. 1.

Figure 1: Model architecture (coarse layer 1: 11x11 conv of stride 4 followed by 2x2 pooling; fine layer 1: 9x9 conv of stride 2 followed by 2x2 pooling). Layer sizes:

Layer            input    1      2,3,4  5     6    7       fine 1,2,3,4
Size (NYUDepth)  304x228  37x27  18x13  8x6   1x1  74x55   74x55
Size (KITTI)     576x172  71x20  35x9   17x4  1x1  142x27  142x27
Ratio to input   /1       /8     /16    /32   –    /4      /4
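The first column of the size table in Fig. 1 can be reproduced with standard "valid" convolution arithmetic. The helper below is a generic sketch; the padding of the later layers is not fully specified in the text, so only the first coarse stage (an 11x11 convolution of stride 4 followed by 2x2 pooling, per Fig. 1) is checked here.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling stage:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Coarse layer 1 on the 304x228 NYUDepth input: 11x11 conv of stride 4,
# then 2x2 max-pooling of stride 2, reproduces the 37x27 entry of Fig. 1.
width = conv_out(conv_out(304, 11, stride=4), 2, stride=2)
height = conv_out(conv_out(228, 11, stride=4), 2, stride=2)
```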
The final output is at 1/4-resolution compared to the input (which is itself downsampled from the original dataset by a factor of 2), and corresponds to a center crop containing most of the input (as we describe later, we lose a small border area due to the first layer of the fine-scale network and image transformations). Note that the spatial dimension of the output is larger than that of the topmost convolutional feature map. Rather than limiting the output to the feature map size and relying on hardcoded upsampling before passing the prediction to the fine network, we allow the top full layer to learn templates over the larger area (74x55 for NYU Depth). These are expected to be blurry, but will be better than the upsampled output of an 8x6 prediction (the top feature map size); essentially, we allow the network to learn its own upsampling based on the features. Sample output weights are shown in Fig. 2. All hidden layers use rectified linear units for activations, with the exception of the coarse output layer 7, which is linear. Dropout is applied to the fully-connected hidden layer 6. The convolutional layers (1-5) of the coarse-scale network are pretrained on the ImageNet classification task [1]; while developing the model, we found pretraining on ImageNet worked better than initializing randomly, although the difference was not very large (footnote 1). 3.1.2 Local Fine-Scale Network After taking a global perspective to predict the coarse depth map, we make local refinements using a second, fine-scale network. The task of this component is to edit the coarse prediction it receives to align with local details such as object and wall edges. The fine-scale network stack consists of convolutional layers only, along with one pooling stage for the first layer edge features. While the coarse network sees the entire scene, the field of view of an output unit in the fine network is 45x45 pixels of input.
The convolutional layers are applied across feature maps at the target output size, allowing a relatively high-resolution output at 1/4 the input scale. More concretely, the coarse output is fed in as an additional low-level feature map. By design, the coarse prediction is the same spatial size as the output of the first fine-scale layer (after pooling), and we concatenate the two together (Fine 2 in Fig. 1). Subsequent layers maintain this size using zero-padded convolutions. All hidden units use rectified linear activations. The last convolutional layer is linear, as it predicts the target depth. We train the coarse network first against the ground-truth targets, then train the fine-scale network keeping the coarse-scale output fixed (i.e. when training the fine network, we do not backpropagate through the coarse one).

Footnote 1: When pretraining, we stack two fully connected layers with 4096 - 4096 - 1000 output units each, with dropout applied to the two hidden layers, as in [9]. We train the network using random 224x224 crops from the center 256x256 region of each training image, rescaled so the shortest side has length 256. This model achieves a top-5 error rate of 18.1% on the ILSVRC2012 validation set, voting with 2 flips and 5 translations per image.

Figure 2: Weight vectors from layer Coarse 7 (coarse output), for (a) KITTI and (b) NYUDepth. Red is positive (farther) and blue is negative (closer); black is zero. Weights are selected uniformly and shown in descending order by l2 norm. KITTI weights often show changes in depth on either side of the road. NYUDepth weights often show wall positions and doorways.

3.2 Scale-Invariant Error The global scale of a scene is a fundamental ambiguity in depth prediction. Indeed, much of the error accrued using current elementwise metrics may be explained simply by how well the mean depth is predicted. For example, Make3D trained on NYUDepth obtains 0.41 error using RMSE in log space (see Table 1).
However, using an oracle to substitute the mean log depth of each prediction with the mean from the corresponding ground truth reduces the error to 0.33, a 20% relative improvement. Likewise, for our system, these error rates are 0.28 and 0.22, respectively. Thus, just finding the average scale of the scene accounts for a large fraction of the total error. Motivated by this, we use a scale-invariant error to measure the relationships between points in the scene, irrespective of the absolute global scale. For a predicted depth map y and ground truth y^*, each with n pixels indexed by i, we define the scale-invariant mean squared error (in log space) as

D(y, y^*) = \frac{1}{2n} \sum_{i=1}^{n} \left( \log y_i - \log y_i^* + \alpha(y, y^*) \right)^2,   (1)

where \alpha(y, y^*) = \frac{1}{n} \sum_i (\log y_i^* - \log y_i) is the value of \alpha that minimizes the error for a given (y, y^*). For any prediction y, e^\alpha is the scale that best aligns it to the ground truth. All scalar multiples of y have the same error, hence the scale invariance. Two additional ways to view this metric are provided by the following equivalent forms. Setting d_i = \log y_i - \log y_i^* to be the difference between the prediction and ground truth at pixel i, we have

D(y, y^*) = \frac{1}{2n^2} \sum_{i,j} \left( (\log y_i - \log y_j) - (\log y_i^* - \log y_j^*) \right)^2   (2)

         = \frac{1}{n} \sum_i d_i^2 - \frac{1}{n^2} \sum_{i,j} d_i d_j = \frac{1}{n} \sum_i d_i^2 - \frac{1}{n^2} \left( \sum_i d_i \right)^2   (3)

Eqn. 2 expresses the error by comparing relationships between pairs of pixels i, j in the output: to have low error, each pair of pixels in the prediction must differ in depth by an amount similar to that of the corresponding pair in the ground truth. Eqn. 3 relates the metric to the original l2 error, but with an additional term, -\frac{1}{n^2} \sum_{i,j} d_i d_j, that credits mistakes if they are in the same direction and penalizes them if they oppose. Thus, an imperfect prediction will have lower error when its mistakes are consistent with one another. The last part of Eqn. 3 rewrites this as a linear-time computation.
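As a minimal sketch, the linear-time form of Eqn. 3 can be computed directly over flat lists of depths (a simplification of the per-pixel evaluation; the function name is ours):

```python
import math

def scale_invariant_error(pred, target):
    """Linear-time form of the scale-invariant log error (Eqn. 3):
    (1/n) * sum_i d_i^2 - (1/n^2) * (sum_i d_i)^2,
    where d_i = log y_i - log y*_i."""
    d = [math.log(p) - math.log(t) for p, t in zip(pred, target)]
    n = len(d)
    return sum(di * di for di in d) / n - (sum(d) / n) ** 2
```

Multiplying every predicted depth by a constant shifts every d_i by the same amount, so the error is unchanged, which is the scale invariance the section describes.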
In addition to the scale-invariant error, we also measure the performance of our method according to several error metrics that have been proposed in prior works, as described in Section 4. 3.3 Training Loss In addition to performance evaluation, we also tried using the scale-invariant error as a training loss. Inspired by Eqn. 3, we set the per-sample training loss to

L(y, y^*) = \frac{1}{n} \sum_i d_i^2 - \frac{\lambda}{n^2} \left( \sum_i d_i \right)^2   (4)

where d_i = \log y_i - \log y_i^* and \lambda \in [0, 1]. Note the output of the network is \log y; that is, the final linear layer predicts the log depth. Setting \lambda = 0 reduces to elementwise l2, while \lambda = 1 is the scale-invariant error exactly. We use the average of these, i.e. \lambda = 0.5, finding that this produces good absolute-scale predictions while slightly improving qualitative output. During training, most of the target depth maps will have some missing values, particularly near object boundaries, windows and specular surfaces. We deal with these simply by masking them out and evaluating the loss only on valid points, i.e. we replace n in Eqn. 4 with the number of pixels that have a target depth, and perform the sums excluding pixels i that have no depth value.

3.4 Data Augmentation We augment the training data with random online transformations (values shown for NYUDepth; for KITTI see footnote 2):
• Scale: Input and target images are scaled by s ∈ [1, 1.5], and the depths are divided by s.
• Rotation: Input and target are rotated by r ∈ [−5, 5] degrees.
• Translation: Input and target are randomly cropped to the sizes indicated in Fig. 1.
• Color: Input values are multiplied globally by a random RGB value c ∈ [0.8, 1.2]^3.
• Flips: Input and target are horizontally flipped with 0.5 probability.
Note that image scaling and translation do not preserve the world-space geometry of the scene. This is easily corrected in the case of scaling by dividing the depth values by the scale s (making the image s times larger effectively moves the camera s times closer).
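A minimal sketch of the geometry-preserving part of the scale augmentation (plain Python; the resizing itself is left out, only the depth correction from the text is shown, and the function names are ours):

```python
import random

def scale_depth_targets(depths, s):
    """Making the image s times larger effectively moves the camera s times
    closer, so every target depth is divided by s."""
    return [d / s for d in depths]

def sample_scale(rng, lo=1.0, hi=1.5):
    # s ~ Uniform[1, 1.5] for NYUDepth (Section 3.4); [1, 1.2] for KITTI.
    return rng.uniform(lo, hi)
```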
Although translations are not easily fixed (they effectively change the camera to be incompatible with the depth values), we found that the extra data they provided benefited the network even though the scenes they represent were slightly warped. The other transforms, flips and in-plane rotation, are geometry-preserving. At test time, we use a single center crop at scale 1.0 with no rotation or color transforms. 4 Experiments We train our model on the raw versions of both NYU Depth v2 [18] and KITTI [3]. The raw distributions contain many additional images collected from the same scenes as in the more commonly used small distributions, but with no preprocessing; in particular, points for which there is no depth value are left unfilled. However, our model’s natural ability to handle such gaps as well as its demand for large training sets make these fitting sources of data. 4.1 NYU Depth The NYU Depth dataset [18] is composed of 464 indoor scenes, taken as video sequences using a Microsoft Kinect camera. We use the official train/test split, using 249 scenes for training and 215 for testing, and construct our training set using the raw data for these scenes. RGB inputs are downsampled by half, from 640x480 to 320x240. Because the depth and RGB cameras operate at different variable frame rates, we associate each depth image with its closest RGB image in time, and throw away frames where one RGB image is associated with more than one depth (such a one-to-many mapping is not predictable). We use the camera projections provided with the dataset to align RGB and depth pairs; pixels with no depth value are left missing and are masked out. To remove many invalid regions caused by windows, open doorways and specular surfaces we also mask out depths equal to the minimum or maximum recorded for each image. The training set has 120K unique images, which we shuffle into a list of 220K after evening the scene distribution (1200 per scene).
We test on the 694-image NYU Depth v2 test set (with filled-in depth values). We train the coarse network for 2M samples using SGD with batches of size 32. We then hold it fixed and train the fine network for 1.5M samples (given outputs from the already-trained coarse one). Learning rates are: 0.001 for coarse convolutional layers 1-5, 0.1 for coarse full layers 6 and 7, 0.001 for fine layers 1 and 3, and 0.01 for fine layer 2. These ratios were found by trial-and-error on a validation set (folded back into the training set for our final evaluations), and the global scale of all the rates was tuned to a factor of 5. Momentum was 0.9. Training took 38h for the coarse network and 26h for fine, for a total of 2.6 days using an NVIDIA GTX Titan Black. Test prediction takes 0.33s per batch (0.01s/image).

Footnote 2: For KITTI, s ∈ [1, 1.2], and rotations are not performed (images are horizontal from the camera mount).

4.2 KITTI The KITTI dataset [3] is composed of several outdoor scenes captured while driving with car-mounted cameras and depth sensor. We use 56 scenes from the “city,” “residential,” and “road” categories of the raw data. These are split into 28 for training and 28 for testing. The RGB images are originally 1224x368, and downsampled by half to form the network inputs. The depth for this dataset is sampled at irregularly spaced points, captured at different times using a rotating LIDAR scanner. When constructing the ground truth depths for training, there may be conflicting values; since the RGB cameras shoot when the scanner points forward, we resolve conflicts at each pixel by choosing the depth recorded closest to the RGB capture time. Depth is only provided within the bottom part of the RGB image; however, we feed the entire image into our model to provide additional context to the global coarse-scale network (the fine network sees the bottom crop corresponding to the target area). The training set has 800 images per scene.
We exclude shots where the car is stationary (acceleration below a threshold) to avoid duplicates. Both left and right RGB cameras are used, but are treated as unassociated shots. The training set has 20K unique images, which we shuffle into a list of 40K (including duplicates) after evening the scene distribution. We train the coarse model first for 1.5M samples, then the fine model for 1M. Learning rates are the same as for NYU Depth. Training took 30h for the coarse model and 14h for fine; test prediction takes 0.40s/batch (0.013s/image). 4.3 Baselines and Comparisons We compare our method against Make3D trained on the same datasets, as well as the published results of other current methods [12, 7]. As an additional reference, we also compare to the mean depth image computed across the training set. We trained Make3D on KITTI using a subset of 700 images (25 per scene), as the system was unable to scale beyond this size. Depth targets were filled in using the colorization routine in the NYUDepth development kit. For NYUDepth, we used the common distribution training set of 795 images. We evaluate each method using several errors from prior works, as well as our scale-invariant metric:

Threshold: % of y_i s.t. \max(y_i / y_i^*, y_i^* / y_i) = \delta < thr
Abs relative difference: \frac{1}{|T|} \sum_{y \in T} |y - y^*| / y^*
Squared relative difference: \frac{1}{|T|} \sum_{y \in T} \|y - y^*\|^2 / y^*
RMSE (linear): \sqrt{\frac{1}{|T|} \sum_{y \in T} \|y_i - y_i^*\|^2}
RMSE (log): \sqrt{\frac{1}{|T|} \sum_{y \in T} \|\log y_i - \log y_i^*\|^2}
RMSE (log, scale-invariant): the error of Eqn. 1

Note that the predictions from Make3D and our network correspond to slightly different center crops of the input. We compare them on the intersection of their regions, and upsample predictions to the full original input resolution using nearest-neighbor. Upsampling negligibly affects performance compared to downsampling the ground truth and evaluating at the output resolution (footnote 3). 5 Results 5.1 NYU Depth Results for the NYU Depth dataset are provided in Table 1.
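The evaluation metrics of Section 4.3 can be sketched in plain Python over flat lists of positive depths (a simplification of the per-pixel evaluation; the function and key names are ours):

```python
import math

def depth_metrics(pred, target):
    """Threshold accuracies, relative differences, and RMSEs from Section 4.3."""
    n = len(pred)
    ratios = [max(p / t, t / p) for p, t in zip(pred, target)]
    diffs = [p - t for p, t in zip(pred, target)]
    log_diffs = [math.log(p) - math.log(t) for p, t in zip(pred, target)]
    return {
        "delta<1.25": sum(r < 1.25 for r in ratios) / n,
        "delta<1.25^2": sum(r < 1.25 ** 2 for r in ratios) / n,
        "delta<1.25^3": sum(r < 1.25 ** 3 for r in ratios) / n,
        "abs_rel": sum(abs(d) / t for d, t in zip(diffs, target)) / n,
        "sq_rel": sum(d * d / t for d, t in zip(diffs, target)) / n,
        "rmse": math.sqrt(sum(d * d for d in diffs) / n),
        "rmse_log": math.sqrt(sum(d * d for d in log_diffs) / n),
    }
```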
As explained in Section 4.3, we compare against the data mean and Make3D as baselines, as well as Karsch et al. [7] and Ladicky et al. [12]. (Ladicky et al. use a joint model which is trained using both depth and semantic labels.) Our system achieves the best performance on all metrics, obtaining an average 35% relative gain compared to the runner-up. Note that our system is trained using the raw dataset, which contains many more example instances than the data used by other approaches, and is able to effectively leverage it to learn relevant features and their associations. This dataset breaks many assumptions made by Make3D, particularly horizontal alignment of the ground plane; as a result, Make3D has relatively poor performance in this task. Importantly, our method improves over it on both scale-dependent and scale-invariant metrics, showing that our system is able to predict better relations as well as better means. Qualitative results are shown on the left side of Fig. 4, sorted top-to-bottom by scale-invariant MSE. Although the fine-scale network does not improve in the error measurements, its effect is clearly visible in the depth maps: surface boundaries have sharper transitions, aligning to local details. However, some texture edges are sometimes also included. Fig. 3 compares Make3D as well as outputs from our network trained using losses with λ = 0 and λ = 0.5. While we did not observe numeric gains using λ = 0.5, it did produce slight qualitative improvements in more detailed areas.

Footnote 3: On NYUDepth, log RMSE is 0.285 vs 0.286 for upsampling and downsampling, respectively, and scale-invariant RMSE is 0.219 vs 0.221. The intersection is 86% of the network region and 100% of Make3D for NYUDepth, and 100% of the network and 82% of Make3D for KITTI.

Table 1: Comparison on the NYUDepth dataset (higher is better for the threshold rows; lower is better for the rest)

Metric                    Mean   Make3D  Ladicky&al  Karsch&al  Coarse  Coarse+Fine
threshold δ < 1.25        0.418  0.447   0.542       –          0.618   0.611
threshold δ < 1.25^2      0.711  0.745   0.829       –          0.891   0.887
threshold δ < 1.25^3      0.874  0.897   0.940       –          0.969   0.971
abs relative difference   0.408  0.349   –           0.350      0.228   0.215
sqr relative difference   0.581  0.492   –           –          0.223   0.212
RMSE (linear)             1.244  1.214   –           1.2        0.871   0.907
RMSE (log)                0.430  0.409   –           –          0.283   0.285
RMSE (log, scale inv.)    0.304  0.325   –           –          0.221   0.219

Figure 3: Qualitative comparison of Make3D, our method trained with l2 loss (λ = 0), and our method trained with both l2 and scale-invariant loss (λ = 0.5).

5.2 KITTI We next examine results on the KITTI driving dataset. Here, the Make3D baseline is well-suited to the dataset, being composed of horizontally aligned images, and achieves relatively good results. Still, our method improves over it on all metrics, by an average 31% relative gain. Just as importantly, there is a 25% gain in both the scale-dependent and scale-invariant RMSE errors, showing there is substantial improvement in the predicted structure. Again, the fine-scale network does not improve much over the coarse one in the error metrics, but differences between the two can be seen in the qualitative outputs. The right side of Fig. 4 shows examples of predictions, again sorted by error. The fine-scale network produces sharper transitions here as well, particularly near the road edge. However, the changes are somewhat limited. This is likely caused by uncorrected alignment issues between the depth map and input in the training data, due to the rotating scanner setup.
This dissociates edges from their true position, causing the network to average over their more random placements. Fig. 3 shows Make3D performing much better on this data, as expected, while using the scale-invariant error as a loss seems to have little effect in this case.

Table 2: Comparison on the KITTI dataset (threshold rows: higher is better; error rows: lower is better)

| Metric                  | Mean  | Make3D | Coarse | Coarse + Fine |
| threshold δ < 1.25      | 0.556 | 0.601  | 0.679  | 0.692         |
| threshold δ < 1.25²     | 0.752 | 0.820  | 0.897  | 0.899         |
| threshold δ < 1.25³     | 0.870 | 0.926  | 0.967  | 0.967         |
| abs relative difference | 0.412 | 0.280  | 0.194  | 0.190         |
| sqr relative difference | 5.712 | 3.012  | 1.531  | 1.515         |
| RMSE (linear)           | 9.635 | 8.734  | 7.216  | 7.156         |
| RMSE (log)              | 0.444 | 0.361  | 0.273  | 0.270         |
| RMSE (log, scale inv.)  | 0.359 | 0.327  | 0.248  | 0.246         |

6 Discussion

Predicting depth estimates from a single image is a challenging task. Yet by combining information from both global and local views, it can be performed reasonably well. Our system accomplishes this through the use of two deep networks, one that estimates the global depth structure, and another that refines it locally at finer resolution. We achieve a new state of the art on this task for the NYU Depth and KITTI datasets, having effectively leveraged the full raw data distributions.

In future work, we plan to extend our method to incorporate further 3D geometry information, such as surface normals. Promising results in normal map prediction have been made by Fouhey et al. [2], and integrating them along with depth maps stands to improve overall performance [16]. We also hope to extend the depth maps to the full original input resolution by repeated application of successively finer-scaled local networks.

Figure 4: Example predictions from our algorithm. NYUDepth on left, KITTI on right. For each image, we show (a) input, (b) output of coarse network, (c) refined output of fine network, (d) ground truth.
The fine scale network edits the coarse-scale input to better align with details such as object boundaries and wall edges. Examples are sorted from best (top) to worst (bottom).

Acknowledgements

The authors are grateful for support from ONR #N00014-13-1-0646, NSF #1116923, #1149633 and Microsoft Research.

References
[1] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[2] D. F. Fouhey, A. Gupta, and M. Hebert. Data-driven 3D primitives for single image understanding. In ICCV, 2013.
[3] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The KITTI dataset. International Journal of Robotics Research (IJRR), 2013.
[4] R. Hadsell, P. Sermanet, J. Ben, A. Erkan, M. Scoffier, K. Kavukcuoglu, U. Muller, and Y. LeCun. Learning long-range vision for autonomous off-road driving. Journal of Field Robotics, 26(2):120–144, 2009.
[5] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521540518, second edition, 2004.
[6] D. Hoiem, A. A. Efros, and M. Hebert. Automatic photo pop-up. In ACM SIGGRAPH, pages 577–584, 2005.
[7] K. Karsch, C. Liu, S. B. Kang, and N. England. Depth extraction from video using non-parametric sampling. In TPAMI, 2014.
[8] K. Konda and R. Memisevic. Unsupervised learning of depth and motion. arXiv:1312.3429v2, 2013.
[9] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[10] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. Image and depth from a conventional camera with a coded aperture. In SIGGRAPH, 2007.
[11] C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. Freeman. SIFT flow: dense correspondence across different scenes. 2008.
[12] L. Ladicky, J. Shi, and M. Pollefeys. Pulling things out of perspective. In CVPR, 2014.
[13] R. Memisevic and C. Conrad. Stereopsis via deep learning. In NIPS Workshop on Deep Learning, 2011.
[14] J. Michels, A. Saxena, and A. Y. Ng. High speed obstacle avoidance using monocular vision and reinforcement learning. In ICML, pages 593–600, 2005.
[15] A. Saxena, S. H. Chung, and A. Y. Ng. Learning depth from single monocular images. In NIPS, 2005.
[16] A. Saxena, M. Sun, and A. Y. Ng. Make3D: Learning 3-D scene structure from a single still image. TPAMI, 2008.
[17] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. IJCV, 47:7–42, 2002.
[18] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGBD images. In ECCV, 2012.
[19] F. H. Sinz, J. Q. Candela, G. H. Bakır, C. E. Rasmussen, and M. O. Franz. Learning depth from stereo. In Pattern Recognition, pages 245–252. Springer, 2004.
[20] N. Snavely, S. M. Seitz, and R. Szeliski. Photo tourism: Exploring photo collections in 3D. In SIGGRAPH, 2006.
[21] K. Yamaguchi, T. Hazan, D. McAllester, and R. Urtasun. Continuous Markov random fields for robust stereo estimation. arXiv:1204.1393v1, 2012.
Provable Submodular Minimization using Wolfe's Algorithm Deeparnab Chakrabarty∗ Prateek Jain∗ Pravesh Kothari† Abstract Owing to several applications in large scale learning and vision problems, fast submodular function minimization (SFM) has become a critical problem. Theoretically, unconstrained SFM can be performed in polynomial time [10, 11]. However, these algorithms are typically not practical. In 1976, Wolfe [21] proposed an algorithm to find the minimum Euclidean norm point in a polytope, and in 1980, Fujishige [3] showed how Wolfe's algorithm can be used for SFM. For general submodular functions, this Fujishige-Wolfe minimum norm algorithm seems to have the best empirical performance. Despite its good practical performance, very little is known about Wolfe's minimum norm algorithm theoretically. To our knowledge, the only result is an exponential time analysis due to Wolfe [21] himself. In this paper we give a maiden convergence analysis of Wolfe's algorithm. We prove that in t iterations, Wolfe's algorithm returns an O(1/t)-approximate solution to the min-norm point on any polytope. We also prove a robust version of Fujishige's theorem which shows that an O(1/n²)-approximate solution to the min-norm point on the base polytope implies exact submodular minimization. As a corollary, we get the first pseudo-polynomial time guarantee for the Fujishige-Wolfe minimum norm algorithm for unconstrained submodular function minimization. 1 Introduction An integer-valued¹ function f : 2^X → Z defined over subsets of some finite ground set X of n elements is submodular if it satisfies the following diminishing marginal returns property: for every S ⊆ T ⊆ X and i ∈ X \ T, f(S ∪ {i}) − f(S) ≥ f(T ∪ {i}) − f(T). Submodularity arises naturally in several applications such as image segmentation [17], sensor placement [18], etc. where minimizing an arbitrary submodular function is an important primitive.
In submodular function minimization (SFM), we assume access to an evaluation oracle for f which for any subset S ⊆ X returns the value f(S). We denote the time taken by the oracle to answer a single query as EO. The objective is to find a set T ⊆ X satisfying f(T) ≤ f(S) for every S ⊆ X. In 1981, Grotschel, Lovasz and Schrijver [8] demonstrated the first polynomial time algorithm for SFM using the ellipsoid algorithm. This algorithm, however, is practically infeasible due to the running time and the numerical issues in implementing the ellipsoid algorithm. In 2001, Schrijver [19] and Iwata et al. [9] independently designed combinatorial polynomial time algorithms for SFM. Currently, the best algorithm is by Iwata and Orlin [11] with a running time of O(n⁵EO + n⁶). However, from a practical standpoint, none of the provably polynomial time algorithms exhibit good performance on instances of SFM encountered in practice (see §4). This, along with the widespread applicability of SFM in machine learning, has inspired a large body of work on practically fast procedures (see [1] for a survey). But most of these procedures focus either on special submodular functions such as decomposable functions [16, 20] or on constrained SFM problems [13, 12, 15, 14]. (∗Microsoft Research, 9 Lavelle Road, Bangalore 560001. †University of Texas at Austin; part of the work done while interning at Microsoft Research. ¹One can assume any function is integer valued after suitable scaling.) Fujishige-Wolfe's Algorithm for SFM: For any submodular function f, the base polytope B_f of f is defined as follows:

B_f = {x ∈ R^n : x(A) ≤ f(A) ∀A ⊂ X, and x(X) = f(X)},   (1)

where x(A) := ∑_{i∈A} x_i and x_i is the i-th coordinate of x ∈ R^n. Fujishige [3] showed that if one can obtain the minimum norm point on the base polytope, then one can solve SFM. Finding the minimum norm point, however, is a non-trivial problem; at present, to our knowledge, the only polynomial time algorithm known is via the ellipsoid method.
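As a ground-truth reference point, both the diminishing-returns property and the SFM optimum can be checked by enumeration on a tiny ground set. The sketch below is illustrative only (exponential time; the path-graph cut function at the bottom is our own toy example, not from the paper):

```python
from itertools import combinations

def subsets(ground):
    """Yield all subsets of `ground` as frozensets."""
    items = sorted(ground)
    for r in range(len(items) + 1):
        for c in combinations(items, r):
            yield frozenset(c)

def is_submodular(f, ground):
    """Check diminishing returns: f(S+i)-f(S) >= f(T+i)-f(T) for S <= T, i not in T."""
    for S in subsets(ground):
        for T in subsets(ground):
            if S <= T:
                for i in ground - T:
                    if f(S | {i}) - f(S) < f(T | {i}) - f(T):
                        return False
    return True

def brute_force_sfm(f, ground):
    """Exhaustive submodular function minimization (2^n oracle calls)."""
    return min(subsets(ground), key=f)

# Toy example: the cut function of the path graph 1-2-3 (cut functions are submodular).
edges = [(1, 2), (2, 3)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
```

Such a brute-force minimizer is useful only for sanity-checking faster methods on small instances, which is how it would be used alongside the algorithms analyzed in this paper.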
Wolfe [21] described an iterative procedure to find minimum norm points in polytopes as long as linear functions could be (efficiently) minimized over them. Although the base polytope has exponentially many constraints, a simple greedy algorithm can minimize any linear function over it. Therefore using Wolfe's procedure on the base polytope coupled with Fujishige's theorem becomes a natural approach to SFM. This was suggested as early as 1984 in Fujishige [4] and is now called the Fujishige-Wolfe algorithm for SFM. This approach towards SFM was revitalized in 2006 when Fujishige and Isotani [6, 7] announced encouraging computational results regarding the minimum norm point algorithm. In particular, this algorithm significantly out-performed all known provably polynomial time algorithms. Theoretically, however, little is known regarding the convergence of Wolfe's procedure except for the finite, but exponential, running time Wolfe himself proved. Nor is the situation any better for its application on the base polytope. Given the practical success, we believe this is an important, and intriguing, theoretical challenge. In this work, we make some progress towards analyzing the Fujishige-Wolfe method for SFM and, in fact, Wolfe's algorithm in general. In particular, we prove the following two results:

• We prove (in Theorem 4) that for any polytope B, Wolfe's algorithm converges to an ε-approximate solution in O(1/ε) steps. More precisely, in O(nQ²/ε) iterations, Wolfe's algorithm returns a point x with ∥x∥² ≤ ∥x*∥² + ε, where Q = max_{p∈B} ∥p∥₂.

• We prove (in Theorem 5) a robust version of a theorem by Fujishige [3] relating min-norm points on the base polytope to SFM. In particular, we prove that an approximate min-norm point solution provides an approximate solution to SFM as well. More precisely, if x satisfies ∥x∥² ≤ z⊤x + ε² for all z ∈ B_f, then f(S_x) ≤ min_S f(S) + 2nε, where S_x can be constructed efficiently using x.
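The rounding step that produces S_x is spelled out later (Theorem 5): sort the coordinates of x, and cut at the smallest index k such that x_{k+1} ≥ 0 and the gap x_{k+1} − x_k is at least ε/n. A sketch of that rounding rule is below; the ±∞ sentinels are our own implementation convenience so that k = 0 (empty set) and k = n are handled uniformly, and with ε = 0 the rule reduces to S = {i : x_i < 0} as in Fujishige's original theorem.

```python
import numpy as np

def round_to_set(x, eps):
    """Candidate minimizer from an approximate min-norm point (Theorem 5 rule)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    order = np.argsort(x)                                 # renumber so x_1 <= ... <= x_n
    xs = np.concatenate(([-np.inf], x[order], [np.inf]))  # sentinels x_0 and x_{n+1}
    for k in range(n + 1):
        # (C1) x_{k+1} >= 0 and (C2) x_{k+1} - x_k >= eps/n
        if xs[k + 1] >= 0 and xs[k + 1] - xs[k] >= eps / n:
            return {int(i) for i in order[:k]}
    return set(range(n))                                  # unreachable: k = n always qualifies
```

The gap condition (C2) is what makes the rounding robust: a coordinate that is barely nonnegative, within ε/n of its negative neighbor, is still swept into S.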
Together, these two results give us our main result, which is a pseudopolynomial bound on the running time of the Fujishige-Wolfe algorithm for submodular function minimization.

Theorem 1. (Main Result.) Fix a submodular function f : 2^X → Z. The Fujishige-Wolfe algorithm returns the minimizer of f in O((n⁵EO + n⁷)F²) time, where F := max_{i=1,…,n} max(|f({i})|, |f([n]) − f([n] \ {i})|).

Our analysis suggests that the Fujishige-Wolfe algorithm is dependent on F and has worse dependence on n than the Iwata-Orlin [11] algorithm. To verify this, we conducted an empirical study on several standard SFM problems. However, for the considered benchmark functions, the running time of the Fujishige-Wolfe algorithm seemed to be independent of F and exhibited better dependence on n than the Iwata-Orlin algorithm. This is described in §4.

2 Preliminaries: Submodular Functions and Wolfe's Algorithm

2.1 Submodular Functions and SFM

Given a ground set X on n elements, without loss of generality we think of it as the first n integers [n] := {1, 2, …, n}. Let f be a submodular function. Since submodularity is translation invariant, we assume f(∅) = 0. For a submodular function f, we write B_f ⊆ R^n for the associated base polyhedron of f defined in (1). Given x ∈ R^n, one can find the minimum value of q⊤x over q ∈ B_f in O(n log n + nEO) time using the following greedy algorithm: renumber indices such that x_1 ≤ · · · ≤ x_n, and set q*_i = f([i]) − f([i−1]). Then it can be proved that q* ∈ B_f and that q* minimizes x⊤q over q ∈ B_f. The connection between the SFM problem and the base polytope was first established in the following minimax theorem of Edmonds [2].

Theorem 2 (Edmonds [2]). Given any submodular function f with f(∅) = 0, we have

min_{S⊆[n]} f(S) = max_{x∈B_f} ∑_{i : x_i < 0} x_i.

The following theorem of Fujishige [3] shows the connection between finding the minimum norm point in the base polytope B_f of a submodular function f and the problem of SFM on input f. This forms the basis of Wolfe's algorithm.
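The greedy linear-optimization rule just described is a few lines of code given an evaluation oracle. The sketch below assumes `f` takes a Python set and that f(∅) = 0; the path-graph cut oracle at the bottom is our own toy example, not from the paper.

```python
import numpy as np

def greedy_lo(f, x):
    """Edmonds' greedy rule: return q minimizing q.x over the base polytope B_f."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)            # renumber indices so x is nondecreasing
    q = np.zeros_like(x)
    prefix, prev = set(), 0.0
    for i in order:
        prefix.add(int(i))
        val = f(prefix)
        q[int(i)] = val - prev       # marginal gain f([i]) - f([i-1])
        prev = val
    return q

# Toy oracle: cut function of the path graph 0-1-2 (submodular, cut(empty) = 0).
edges = [(0, 1), (1, 2)]
cut = lambda S: float(sum((u in S) != (v in S) for u, v in edges))
```

By construction q(X) = f(X), and the marginal-gain coordinates make q a vertex of B_f; the sort dominates the cost, giving the O(n log n + nEO) bound quoted above.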
In §3.2, we prove a robust version of this theorem.

Theorem 3 (Fujishige's Theorem [3]). Let f : 2^[n] → Z be a submodular function and let B_f be the associated base polyhedron. Let x* be the optimal solution to min_{x∈B_f} ∥x∥. Define S = {i | x*_i < 0}. Then f(S) ≤ f(T) for every T ⊆ [n].

2.2 Wolfe's Algorithm for the Minimum Norm Point of a Polytope

We now present Wolfe's algorithm for computing the minimum-norm point in an arbitrary polytope B ⊆ R^n. We assume a linear optimization oracle (LO) which takes as input a vector x ∈ R^n and outputs a vector q ∈ arg min_{p∈B} x⊤p. We start by recalling some definitions. The affine hull of a finite set S ⊆ R^n is aff(S) = {y | y = ∑_{z∈S} α_z z, ∑_{z∈S} α_z = 1}. The affine minimizer of S is defined as y = arg min_{z∈aff(S)} ∥z∥², and y satisfies the following affine minimizer property: for any v ∈ aff(S), v⊤y = ∥y∥². The procedure AffineMinimizer(S) returns (y, α), where y is the affine minimizer and α = (α_s)_{s∈S} is the set of coefficients expressing y as an affine combination of points in S. This procedure can be naively implemented in O(|S|³ + n|S|²) time as follows. Let B be the n × |S| matrix whose columns are the points in S. Then α = (B⊤B)⁻¹1 / (1⊤(B⊤B)⁻¹1) and y = Bα.

Algorithm 1 Wolfe's Algorithm
1. Let q be an arbitrary vertex of B. Initialize x ← q. We always maintain x = ∑_{i∈S} λ_i q_i as a convex combination of a subset S of vertices of B. Initialize S = {q} and λ_1 = 1.
2. WHILE(true): (MAJOR CYCLE)
   (a) q := LO(x). // Linear optimization: q ∈ arg min_{p∈B} x⊤p.
   (b) IF ∥x∥² ≤ x⊤q + ε² THEN break. // Termination condition. Output x.
   (c) S := S ∪ {q}.
   (d) WHILE(true): (MINOR CYCLE)
       i. (y, α) = AffineMinimizer(S). // y = arg min_{z∈aff(S)} ∥z∥.
       ii. IF α_i ≥ 0 for all i THEN break. // If y ∈ conv(S), then end minor loop.
       iii. ELSE // If y ∉ conv(S), then update x to the intersection of the boundary of conv(S) and the segment joining y and the previous x. Delete points from S which are not required to describe the new x as a convex combination.
       θ := min_{i : α_i < 0} λ_i/(λ_i − α_i). // Recall, x = ∑_i λ_i q_i.
       Update x ← θy + (1 − θ)x. // By definition of θ, the new x lies in conv(S).
       Update λ_i ← θα_i + (1 − θ)λ_i. // This sets the coefficients of the new x.
       S = {i : λ_i > 0}. // Delete points which have λ_i = 0. This deletes at least one point.
   (e) Update x ← y. // After the minor loop terminates, x is updated to be the affine minimizer of the current set S.
3. RETURN x.

When ε = 0, the algorithm on termination (if it terminates) returns the minimum norm point in B, since ∥x∥² ≤ x⊤x* ≤ ∥x∥ · ∥x*∥. For completeness, we sketch Wolfe's argument in [21] of finite termination. Note that |S| ≤ n always; otherwise the affine minimizer is 0, which either terminates the program or starts a minor cycle that decrements |S|. Thus the number of minor cycles in a major cycle is at most n, and it suffices to bound the number of major cycles. Each major cycle is associated with a set S whose affine minimizer, which is the current x, lies in the convex hull of S. Wolfe calls such sets corrals. Next, we show that ∥x∥ strictly decreases across iterations (major or minor cycle) of the algorithm, which proves that no corral repeats, thus bounding the number of major cycles by the number of corrals. The latter is at most N^n, where N is the number of vertices of B. Consider iteration j which starts with x_j and ends with x_{j+1}. Let S_j be the set S at the beginning of iteration j. If the iteration is a major cycle, then x_{j+1} is the affine minimizer of S_j ∪ {q_j}, where q_j = LO(x_j). Since x_j⊤q_j < ∥x_j∥² (the algorithm doesn't terminate in iteration j) and x_{j+1}⊤q_j = ∥x_{j+1}∥² (affine minimizer property), we get x_j ≠ x_{j+1}, and so ∥x_{j+1}∥ < ∥x_j∥ (since the affine minimizer is unique). If the iteration is a minor cycle, then x_{j+1} = θx_j + (1 − θ)y_j, where y_j is the affine minimizer of S_j and θ < 1. Since ∥y_j∥ < ∥x_j∥ (y_j ≠ x_j since y_j ∉ conv(S_j)), we get ∥x_{j+1}∥ < ∥x_j∥.
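Algorithm 1 translates almost line-for-line into NumPy. The sketch below is illustrative, not the authors' code; for robustness it computes the affine minimizer by solving the KKT system of "minimize ∥Bα∥² subject to ∑α_i = 1" with a least-squares solve, rather than the (B⊤B)⁻¹1 formula quoted above (which requires B⊤B to be invertible).

```python
import numpy as np

def affine_minimizer(S):
    """Min-norm point y of aff(S), with y = sum_i alpha_i S[i]."""
    B = np.stack(S, axis=1)                    # columns are the points of S
    m = len(S)
    # KKT system for: minimize ||B a||^2  subject to  1' a = 1
    K = np.zeros((m + 1, m + 1))
    K[:m, :m] = 2.0 * (B.T @ B)
    K[:m, m] = 1.0                             # Lagrange multiplier column
    K[m, :m] = 1.0                             # affine constraint row
    rhs = np.zeros(m + 1)
    rhs[m] = 1.0
    sol = np.linalg.lstsq(K, rhs, rcond=None)[0]
    alpha = sol[:m]
    return B @ alpha, alpha

def wolfe_min_norm(lo, x0, eps=1e-9, max_iter=1000):
    """Wolfe's min-norm-point algorithm.

    lo(x) returns argmin_{p in B} x.p; x0 is an arbitrary vertex of B.
    """
    S, lam = [np.asarray(x0, dtype=float)], np.array([1.0])
    x = S[0]
    for _ in range(max_iter):
        q = lo(x)                              # major cycle: linear optimization
        if x @ x <= x @ q + eps:               # termination condition
            break
        S.append(np.asarray(q, dtype=float))
        lam = np.append(lam, 0.0)
        while True:                            # minor cycles
            y, alpha = affine_minimizer(S)
            if np.all(alpha >= -1e-12):        # y in conv(S): end minor loop
                break
            neg = alpha < 0
            theta = np.min(lam[neg] / (lam[neg] - alpha[neg]))
            lam = theta * alpha + (1.0 - theta) * lam
            keep = lam > 1e-12                 # drop points whose coefficient hit 0
            S = [s for s, k in zip(S, keep) if k]
            lam = lam[keep]
        x, lam = y, alpha                      # x <- affine minimizer of final S
    return x
```

On a polytope whose vertices are known in closed form, `lo` is trivial; for the base polytope B_f one would plug in the greedy oracle from §2.1.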
3 Analysis

Our refined analysis of Wolfe's algorithm is encapsulated in the following theorem.

Theorem 4. Let B be an arbitrary polytope such that the maximum Euclidean norm of any vertex of B is at most Q. After O(nQ²/ε²) iterations, Wolfe's algorithm returns a point x ∈ B which satisfies ∥x∥² ≤ x⊤q + ε² for all points q ∈ B. In particular, this implies ∥x∥² ≤ ∥x*∥² + 2ε².

The above theorem shows that Wolfe's algorithm converges to the minimum norm point at a 1/t rate. We stress that the above holds for any polytope. To apply this to SFM, we prove the following robust version of Fujishige's theorem connecting the minimum norm point in the base polytope and the set minimizing the submodular function value.

Theorem 5. Fix a submodular function f with base polytope B_f. Let x ∈ B_f be such that ∥x∥² ≤ x⊤q + ε² for all q ∈ B_f. Renumber indices such that x_1 ≤ · · · ≤ x_n. Let S = {1, 2, …, k}, where k is the smallest index satisfying (C1) x_{k+1} ≥ 0 and (C2) x_{k+1} − x_k ≥ ε/n. Then f(S) ≤ f(T) + 2nε for any subset T ⊆ [n]. In particular, if ε = 1/(4n) and f is integer-valued, then S is a minimizer.

Theorem 4 and Theorem 5 imply our main theorem.

Theorem 1. (Main Result.) Fix a submodular function f : 2^X → Z. The Fujishige-Wolfe algorithm returns the minimizer of f in O((n⁵EO + n⁷)F²) time, where F := max_{i=1,…,n} max(|f({i})|, |f([n]) − f([n] \ {i})|).

Proof. The vertices of B_f are well understood: for every permutation σ of [n], we have a vertex with x_{σ(i)} = f({σ(1), …, σ(i)}) − f({σ(1), …, σ(i−1)}). By submodularity of f, we get |x_i| ≤ F for all i. Therefore, for any point x ∈ B_f, ∥x∥² ≤ nF². Choose ε = 1/(4n). From Theorem 4 we know that if we run O(n⁴F²) iterations of Wolfe's algorithm, we will get a point x ∈ B_f such that ∥x∥² ≤ x⊤q + ε² for all q ∈ B_f. Theorem 5 implies this solves the SFM problem. The running time for each iteration is dominated by the time for the subroutine to compute the affine minimizer of S, which is at most O(n³), and the linear optimization oracle.
For B_f, LO(x) can be implemented in O(n log n + nEO) time. This proves the theorem.

We prove Theorem 4 and Theorem 5 in §3.1 and §3.2, respectively.

3.1 Analysis of Wolfe's Min-norm Point Algorithm

The stumbling block in the analysis of Wolfe's algorithm is the interspersing of major and minor cycles, which oscillates the size of S, preventing it from being a good measure of progress. Instead, in our analysis, we use the norm of x as the measure of progress. We have already seen that ∥x∥ strictly decreases. It would be nice to quantify how much the decrease is, say, across one major cycle. This, at present, is out of our reach even for major cycles which contain two or more minor cycles in them. However, we can prove a significant drop in norm in major cycles which have at most one minor cycle in them. We call such major cycles good. The next easy, but very useful, observation is the following: one cannot have too many bad major cycles without having too many good major cycles.

Lemma 1. In any 3n + 1 consecutive iterations, there exists at least one good major cycle.

Proof. Consider a run of r iterations in which all major cycles are bad, and therefore contain ≥ 2 minor cycles. Say there are k major cycles and r − k minor cycles; then r − k ≥ 2k, implying r ≥ 3k. Let S_I be the set S at the start of these iterations and S_F the set at the end. We have |S_F| ≤ |S_I| + k − (r − k) ≤ |S_I| + 2k − r ≤ n − r/3. Therefore r ≤ 3n, since |S_F| ≥ 0.

Before proceeding, we introduce some notation.

Definition 1. Given a point x ∈ B, denote err(x) := ∥x∥² − ∥x*∥². Given points x and q, let ∆(x, q) := ∥x∥² − x⊤q, and let ∆(x) := max_{q∈B} ∆(x, q) = ∥x∥² − min_{q∈B} x⊤q. Observe that ∆(x) ≥ err(x)/2, since ∆(x) ≥ ∥x∥² − x⊤x* ≥ (∥x∥² − ∥x*∥²)/2.

We now use t to index all good major cycles. Let x_t be the point x at the beginning of the t-th good major cycle. The next theorem shows that the norm significantly drops across good major cycles.

Theorem 6.
For t iterating over good major cycles, err(x_t) − err(x_{t+1}) ≥ ∆²(x_t)/(8Q²).

We now complete the proof of Theorem 4 using Theorem 6.

Proof of Theorem 4. Using Theorem 6, we get err(x_t) − err(x_{t+1}) ≥ err(x_t)²/(32Q²), since ∆(x) ≥ err(x)/2 for all x. We claim that in t* ≤ 64Q²/ε² good major cycles, we reach x_{t*} with err(x_{t*}) ≤ ε². To see this, rewrite as follows: err(x_{t+1}) ≤ err(x_t)(1 − err(x_t)/(32Q²)) for all t. Now let e_0 := err(x_0). Define t_0, t_1, … such that for all k ≥ 1 we have err(x_t) > e_0/2^k for t ∈ [t_{k−1}, t_k). That is, t_k is the first time t at which err(x_t) ≤ e_0/2^k. Note that for t ∈ [t_{k−1}, t_k), we have err(x_{t+1}) ≤ err(x_t)(1 − e_0/(32Q²·2^k)). This implies that within 32Q²·2^k/e_0 time units after t_{k−1}, we will have err(x_t) ≤ err(x_{t_{k−1}})/2; here we have used the fact that (1 − δ)^{1/δ} < 1/2 when δ < 1/32. That is, t_k ≤ t_{k−1} + 32Q²·2^k/e_0. We are interested in t* = t_K where 2^K = e_0/ε². We get t* ≤ (32Q²/e_0)(1 + 2 + · · · + 2^K) ≤ 64Q²·2^K/e_0 = 64Q²/ε². Next, we claim that in t** < t* + t′ good major cycles, where t′ = 8Q²/ε², we obtain an x_{t**} with ∆(x_{t**}) ≤ ε². This is because, if not, then, using Theorem 6, in each of the good major cycles t* + 1, t* + 2, …, t* + t′, err(x) falls additively by more than ε⁴/(8Q²), and thus err(x_{t*+t′}) < err(x_{t*}) − ε² ≤ 0, which is a contradiction. Therefore, in O(Q²/ε²) good major cycles, the algorithm obtains an x = x_{t**} with ∆(x) ≤ ε², proving Theorem 4.

The rest of this subsection is dedicated to proving Theorem 6.

Proof of Theorem 6: We start off with a simple geometric lemma.

Lemma 2. Let S be a subset of R^n and suppose y is the minimum norm point of aff(S). Let x and q be arbitrary points in aff(S). Then

∥x∥² − ∥y∥² ≥ ∆(x, q)²/(4Q²),   (2)

where Q is an upper bound on ∥x∥ and ∥q∥.

Proof. Since y is the minimum norm point in aff(S), we have x⊤y = q⊤y = ∥y∥². In particular, ∥x − y∥² = ∥x∥² − ∥y∥².
Therefore,

∆(x, q) = ∥x∥² − x⊤q = ∥x∥² − x⊤y + y⊤q − x⊤q = (y − x)⊤(q − x) ≤ ∥y − x∥ · ∥q − x∥ ≤ ∥y − x∥(∥x∥ + ∥q∥) ≤ 2Q∥y − x∥,

where the first inequality is Cauchy-Schwarz and the second is the triangle inequality. The lemma now follows by squaring the above expression and observing that ∥y − x∥² = ∥x∥² − ∥y∥².

The above lemma takes care of major cycles with no minor cycles in them.

Lemma 3 (Progress in a Major Cycle with no Minor Cycles). Let t be the index of a good major cycle with no minor cycles. Then err(x_t) − err(x_{t+1}) ≥ ∆²(x_t)/(4Q²).

Proof. Let S_t be the set S at the start of the t-th good major cycle, and let q_t be the point minimizing x_t⊤q. Let S = S_t ∪ {q_t} and let y be the minimum norm point in aff(S). Since there are no minor cycles, y ∈ conv(S). Abusing notation, let x_{t+1} = y be the iterate at the call of the next major cycle (and not the next good major cycle). Since the norm monotonically decreases, it suffices to prove the lemma statement for this x_{t+1}. Now apply Lemma 2 with x = x_t, q = q_t and S = S_t ∪ {q_t}. We have err(x_t) − err(x_{t+1}) = ∥x_t∥² − ∥y∥² ≥ ∆(x_t, q_t)²/(4Q²) = ∆(x_t)²/(4Q²).

Now we have to argue about major cycles with exactly one minor cycle. The next observation is a useful structural result.

Lemma 4 (New Vertex Survives a Minor Cycle). Consider any (not necessarily good) major cycle. Let x_t, S_t, q_t be the parameters at the beginning of this cycle, and let x_{t+1}, S_{t+1}, q_{t+1} be the parameters at the beginning of the next major cycle. Then q_t ∈ S_{t+1}.

Proof. Clearly S_{t+1} ⊆ S_t ∪ {q_t}, since q_t is added and then minor cycles may remove some points from S. Suppose q_t ∉ S_{t+1}. Then S_{t+1} ⊆ S_t. But x_{t+1} is the affine minimizer of S_{t+1} and x_t is the affine minimizer of S_t. Since S_t is the larger set, we get ∥x_t∥ ≤ ∥x_{t+1}∥. This contradicts the strict decrease in the norm.

Lemma 5 (Progress in an iteration with exactly one minor cycle). Suppose the t-th good major cycle has exactly one minor cycle. Then err(x_t) − err(x_{t+1}) ≥ ∆(x_t)²/(8Q²).

Proof.
Let x_t, S_t, q_t be the parameters at the beginning of the t-th good major cycle. Let y be the affine minimizer of S_t ∪ {q_t}. Since there is one minor cycle, y ∉ conv(S_t ∪ {q_t}). Let z = θx_t + (1 − θ)y be the intermediate x, that is, the point in the line segment [x_t, y] which lies in conv(S_t ∪ {q_t}). Let S′ be the set after the single minor cycle is run. Since there is just one minor cycle, x_{t+1} (abusing notation once again, since the next major cycle may not be good) is the affine minimizer of S′. Let A := ∥x_t∥² − ∥y∥². From Lemma 2, and using the fact that q_t is the minimizer of x_t⊤q over all q, we have:

A = ∥x_t∥² − ∥y∥² ≥ ∆²(x_t)/(4Q²).   (3)

Recall z = θx_t + (1 − θ)y for some θ ∈ [0, 1]. Since y is the min-norm point of aff(S_t ∪ {q_t}) and x_t ∈ S_t, we get ∥z∥² = θ²∥x_t∥² + (1 − θ²)∥y∥². This yields:

∥x_t∥² − ∥z∥² = (1 − θ²)(∥x_t∥² − ∥y∥²) = (1 − θ²)A.   (4)

Further, recall that S′ is the set after the only minor cycle in the t-th iteration is run; thus, from Lemma 4, q_t ∈ S′. Also z ∈ conv(S′) by definition. And since there is only one minor cycle, x_{t+1} is the affine minimizer of S′. We can apply Lemma 2 with z, q_t and x_{t+1} to get

∥z∥² − ∥x_{t+1}∥² ≥ ∆²(z, q_t)/(4Q²).   (5)

Now we lower bound ∆(z, q_t). By definition of z, we have z⊤q_t = θx_t⊤q_t + (1 − θ)y⊤q_t = θx_t⊤q_t + (1 − θ)∥y∥², where the last equality follows since y⊤q_t = ∥y∥² (q_t ∈ S_t ∪ {q_t} and y is the affine minimizer of S_t ∪ {q_t}). This gives

∆(z, q_t) = ∥z∥² − z⊤q_t = θ²∥x_t∥² + (1 − θ²)∥y∥² − (θx_t⊤q_t + (1 − θ)∥y∥²) = θ(∥x_t∥² − x_t⊤q_t) − θ(1 − θ)(∥x_t∥² − ∥y∥²) = θ(∆(x_t) − (1 − θ)A).   (6)

From (4), (5), and (6), we get

err_t − err_{t+1} ≥ (1 − θ²)A + θ²(∆(x_t) − (1 − θ)A)²/(4Q²).   (7)

We need to show that the RHS is at least ∆(x_t)²/(8Q²). Intuitively, if θ is small (close to 0), the first term implies this using (3), and if θ is large (close to 1), then the second term implies this. The following paragraph formalizes this intuition for any θ. Now, if (1 − θ²)A > ∆(x_t)²/(8Q²), we are done. Therefore, we assume (1 − θ²)A ≤ ∆(x_t)²/(8Q²).
In this case, using the fact that ∆(x_t) ≤ ∥x_t∥² + ∥x_t∥∥q_t∥ ≤ 2Q², we get

(1 − θ)A ≤ (1 − θ²)A ≤ ∆(x_t) · ∆(x_t)/(8Q²) ≤ ∆(x_t)/4.

Substituting in (7), and using (3), we get

err_t − err_{t+1} ≥ (1 − θ²)∆(x_t)²/(4Q²) + 9θ²∆(x_t)²/(64Q²) ≥ ∆(x_t)²/(8Q²).   (8)

This completes the proof of the lemma. Lemma 3 and Lemma 5 complete the proof of Theorem 6.

3.2 A Robust Version of Fujishige's Theorem

In this section we prove Theorem 5, which we restate below.

Theorem 5. Fix a submodular function f with base polytope B_f. Let x ∈ B_f be such that ∥x∥² ≤ x⊤q + ε² for all q ∈ B_f. Renumber indices such that x_1 ≤ · · · ≤ x_n. Let S = {1, 2, …, k}, where k is the smallest index satisfying (C1) x_{k+1} ≥ 0 and (C2) x_{k+1} − x_k ≥ ε/n. Then f(S) ≤ f(T) + 2nε for any subset T ⊆ [n]. In particular, if ε = 1/(4n) and f is integer-valued, then S is a minimizer.

Before proving the theorem, note that setting ε = 0 recovers Fujishige's theorem (Theorem 3).

Proof. We claim that the following inequality holds (below, [i] := {1, …, i}):

∑_{i=1}^{n−1} (x_{i+1} − x_i) · (f([i]) − x([i])) ≤ ε².   (9)

We prove this shortly. Let S and k be as defined in the theorem statement. Note that ∑_{i∈S: x_i≥0} x_i ≤ nε, since (C2) doesn't hold for any index i < k with x_i ≥ 0. Furthermore, since x_{k+1} − x_k ≥ ε/n, we get from (9) that f(S) − x(S) ≤ nε. Therefore f(S) ≤ ∑_{i∈S: x_i<0} x_i + 2nε, which implies the theorem by Theorem 2.

Now we prove (9). Let z ∈ B_f be the point which minimizes z⊤x. By the greedy algorithm described in §2.1, we know that z_i = f([i]) − f([i−1]). Next, we write x in a different basis as follows: x = ∑_{i=1}^{n−1} (x_i − x_{i+1})1_{[i]} + x_n 1_{[n]}. Here 1_{[i]} is shorthand for the vector which has 1's in the first i coordinates and 0's everywhere else. Taking the dot product with (x − z), we get

∥x∥² − x⊤z = (x − z)⊤x = ∑_{i=1}^{n−1} (x_i − x_{i+1})(x⊤1_{[i]} − z⊤1_{[i]}) + x_n(x⊤1_{[n]} − z⊤1_{[n]}).   (10)

Since z_i = f([i]) − f([i−1]), the quantity x⊤1_{[i]} − z⊤1_{[i]} equals x([i]) − f([i]). Therefore the RHS of (10) is the LHS of (9).
The LHS of (10), by the assumption of the theorem, is at most ε2 implying (9). 4 Discussion and Conclusions (a) (b) (c) Figure 1: Running time comparision of Iwata-Orlin’s (IO) method [11] vs Wolfe’s method. (a): s-t mincut function, (b) Iwata’s 3 groups function [16]. (c): Total number of iterations required by Wolfe’s method for solving s-t mincut with increasing F We have shown that the Fujishige-Wolfe algorithm solves SFM in O((n5EO + n7)F 2) time, where F is the maximum change in the value of the function on addition or deletion of an element. Although this is the first pseudopolynomial time analysis of the algorithm, we believe there is room for improvement and hope our work triggers more interest. Note that our anlaysis of the Fujishige-Wolfe algorithm is weaker than the best known method in terms of time complexity (IO method by [11]) on two counts: a) dependence on n, b) dependence on F. In contrast, we found this algorithm significantly outperforming the IO algorithm empirically – we show two plots here. In Figure 1 (a), we run both on Erdos-Renyi graphs with p = 0.8 and randomly chosen s, t nodes. In Figure 1 (b), we run both on the Iwata group functions [16] with 3 groups. Perhaps more interestingly, in Figure 1 (c), we ran the Fujishige-Wolfe algorithm on the simple path graph where s, t were the end points, and changed the capacities on the edges of the graph which changed the parameter F. As can be seen, the number of iterations of the algorithm remains constant even for exponentially increasing F. 7 References [1] Francis Bach. Convex analysis and optimization with submodular functions: a tutorial. CoRR, abs/1010.4207, 2010. 1 [2] Jack Edmonds. Matroids, submodular functions and certain polyhedra. Combinatorial Structures and Their Applications, pages 69–87, 1970. 2, 3 [3] Satoru Fujishige. Lexicographieally optimal base of a polymatroid with respect to a weight vector. Math. Oper. Res., 5:186–196, 1980. 1, 2, 3 [4] Satoru Fujishige. 
Submodular systems and related topics. Math. Programming Study, 1984. 2 [5] Satoru Fujishige. Submodular functions and optimization. Elsevier, 2005. [6] Satoru Fujishige, Takumi Hayashi, and Shigueo Isotani. The minimum-norm-point algorithm applied to submodular function minimization and linear programming. 2006. 2 [7] Satoru Fujishige and Shigueo Isotani. A submodular function minimization algorithm based on the minimum-norm base. Pacific Journal of Optimization, 7:3, 2011. 2 [8] Martin Gr¨otschel, L´aszl´o Lov´asz, and Alexander Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica, 1(2):169–197, 1981. 1 [9] Satoru Iwata, Lisa Fleischer, and Satoru Fujishige. A combinatorial, strongly polynomial-time algorithm for minimizing submodular functions. In STOC, pages 97–106, 2000. 1 [10] Satoru Iwata, Lisa Fleischer, and Satoru Fujishige. A combinatorial strongly polynomial algorithm for minimizing submodular functions. J. ACM, 48(4):761–777, 2001. 1 [11] Satoru Iwata and James B. Orlin. A simple combinatorial algorithm for submodular function minimization. In SODA, pages 1230–1237, 2009. 1, 2, 7 [12] Rishabh Iyer, Stefanie Jegelka, and Jeff Bilmes. Curvature and optimal algorithms for learning and minimizing submodular functions. CoRR, abs/1311.2110, 2013. 1 [13] Rishabh Iyer, Stefanie Jegelka, and Jeff Bilmes. Fast semidifferential-based submodular function optimization. In ICML (3), pages 855–863, 2013. 1 [14] Rishabh K. Iyer and Jeff A. Bilmes. Submodular optimization with submodular cover and submodular knapsack constraints. In NIPS, pages 2436–2444, 2013. 1 [15] Stefanie Jegelka, Francis Bach, and Suvrit Sra. Reflection methods for user-friendly submodular optimization. In NIPS, pages 1313–1321, 2013. 1 [16] Stefanie Jegelka, Hui Lin, and Jeff A. Bilmes. On fast approximate submodular minimization. In NIPS, pages 460–468, 2011. 1, 7 [17] Pushmeet Kohli and Philip H. S. Torr. 
Dynamic graph cuts and their applications in computer vision. In Computer Vision: Detection, Recognition and Reconstruction, pages 51–108. 2010. [18] Andreas Krause, Ajit Paul Singh, and Carlos Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. Journal of Machine Learning Research, 9:235–284, 2008. [19] Alexander Schrijver. A combinatorial algorithm minimizing submodular functions in strongly polynomial time. J. Comb. Theory, Ser. B, 80(2):346–355, 2000. [20] Peter Stobbe and Andreas Krause. Efficient minimization of decomposable submodular functions. In NIPS, pages 2208–2216, 2010. [21] Philip Wolfe. Finding the nearest point in a polytope. Math. Programming, 11:128–149, 1976.
|
2014
|
189
|
5,279
|
Robust Kernel Density Estimation by Scaling and Projection in Hilbert Space Robert A. Vandermeulen Department of EECS University of Michigan Ann Arbor, MI 48109 rvdm@umich.edu Clayton D. Scott Department of EECS University of Michigan Ann Arbor, MI 48109 clayscot@umich.edu Abstract While robust parameter estimation has been well studied in parametric density estimation, there has been little investigation into robust density estimation in the nonparametric setting. We present a robust version of the popular kernel density estimator (KDE). As with other estimators, a robust version of the KDE is useful since sample contamination is a common issue with datasets. What "robustness" means for a nonparametric density estimate is not straightforward and is a topic we explore in this paper. To construct a robust KDE we scale the traditional KDE and project it to its nearest weighted KDE in the L2 norm. This yields a scaled and projected KDE (SPKDE). Because the squared L2 norm penalizes point-wise errors superlinearly, this causes the weighted KDE to allocate more weight to high density regions. We demonstrate the robustness of the SPKDE with numerical experiments and a consistency result which shows that asymptotically the SPKDE recovers the uncontaminated density under sufficient conditions on the contamination. 1 Introduction The estimation of a probability density function (pdf) from a random sample is a ubiquitous problem in statistics. Methods for density estimation can be divided into parametric and nonparametric, depending on whether parametric models are appropriate. Nonparametric density estimators (NDEs) offer the advantage of working under more general assumptions, but they also have disadvantages with respect to their parametric counterparts. One of these disadvantages is the apparent difficulty in making NDEs robust, which is desirable when the data follow not the density of interest, but rather a contaminated version thereof.
In this work we propose a robust version of the KDE, which serves as the workhorse among NDEs [11, 10]. We consider the situation where most observations come from a target density ftar but some observations are drawn from a contaminating density fcon, so our observed samples come from the density fobs = (1 −ε) ftar + εfcon. It is not known which component a given observation comes from. When considering this scenario in the infinite sample setting we would like to construct some transform that, when applied to fobs, yields ftar. We introduce a new formalism to describe transformations that “decontaminate” fobs under sufficient conditions on ftar and fcon. We focus on a specific nonparametric condition on ftar and fcon that reflects the intuition that the contamination manifests in low density regions of ftar. In the finite sample setting, we seek a NDE that converges to ftar asymptotically. Thus, we construct a weighted KDE where the kernel weights are lower in low density regions and higher in high density regions. To do this we multiply the standard KDE by a real value greater than one (scale) and then find the closest pdf to the scaled KDE in the L2 norm (project), resulting in a scaled and projected kernel density estimator (SPKDE). Because the squared L2 norm penalizes point-wise differences between functions quadratically, this causes the 1 SPKDE to draw weight from the low density areas of the KDE and move it to high density areas to get a more uniform difference to the scaled KDE. The asymptotic limit of the SPKDE is a scaled and shifted version of fobs. Given our proposed sufficient conditions on ftar and fcon, the SPKDE can asymptotically recover ftar. A different construction for a robust kernel density estimator, the aptly named “robust kernel density estimator” (RKDE), was developed by Kim & Scott [6]. In that paper the RKDE was analytically and experimentally shown to be robust, but no consistency result was presented. 
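As a concrete illustration of the contamination model fobs = (1 − ε)ftar + εfcon, the following sketch (with hypothetical target and contaminating densities, not the paper's experimental setup) draws each observation from the target with probability 1 − ε and from the contamination otherwise; the component labels are latent and would not be observed in practice.

```python
import numpy as np

def sample_contaminated(n, eps, sample_target, sample_contam, rng):
    """Draw n samples from the mixture (1 - eps) * f_tar + eps * f_con."""
    from_contam = rng.random(n) < eps          # latent component labels (unobserved)
    samples = np.where(from_contam,
                       sample_contam(n, rng),
                       sample_target(n, rng))
    return samples, from_contam

rng = np.random.default_rng(0)
target = lambda n, r: r.normal(0.0, 1.0, n)      # hypothetical f_tar: standard normal
contam = lambda n, r: r.uniform(-2.0, 2.0, n)    # hypothetical f_con: uniform on [-2, 2]
x, labels = sample_contaminated(10000, 0.2, target, contam, rng)
print(labels.mean())   # fraction of contaminated samples, close to eps = 0.2
```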
Vandermeulen & Scott [15] proved that a certain version of the RKDE converges to fobs. To our knowledge the convergence of the SPKDE to a transformed version of fobs, which is equal to ftar under sufficient conditions on ftar and fcon, is the first result of its type. In this paper we present a new formalism for nonparametric density estimation, necessary and sufficient conditions for decontamination, the construction of the SPKDE, and a proof of consistency. We also include experimental results applying the algorithm to benchmark datasets with comparisons to the RKDE, traditional KDE, and an alternative robust KDE implementation. Many of our results and proof techniques are novel in KDE literature. Proofs are contained in the supplemental material. 2 Nonparametric Contamination Models and Decontamination Procedures for Density Estimation What assumptions are necessary and sufficient on a target and contaminating density in order to theoretically recover the target density is a question that, to the best of our knowledge, is completely unexplored in a nonparametric setting. We will approach this problem in the infinite sample setting, where we know fobs = (1 −ε)ftar + εfcon and ε, but do not know ftar or fcon. To this end we introduce a new formalism. Let D be the set of all pdfs on Rd. We use the term contamination model to refer to any subset V ⊂D × D, i.e. a set of pairs (ftar, fcon). Let Rε : D →D be a set of transformations on D indexed by ε ∈[0, 1). We say that Rε decontaminates V if for all (ftar, fcon) ∈V and ε ∈[0, 1) we have Rε((1 −ε)ftar + εfcon) = ftar. One may wonder whether there exists some set of contaminating densities, Dcon, and a transformation, Rε, such that Rε decontaminates D × Dcon. In other words, does there exist some set of contaminating densities for which we can recover any target density? It turns out this is impossible if Dcon contains at least two elements. Proposition 1. Let Dcon ⊂D contain at least two elements. 
There does not exist any transformation Rε which decontaminates D × Dcon. Proof. Let f ∈ D and g, g′ ∈ Dcon such that g ≠ g′. Let ε ∈ (0, 1/2). Clearly ftar ≜ (f(1 − 2ε) + εg)/(1 − ε) and f′tar ≜ (f(1 − 2ε) + εg′)/(1 − ε) are both elements of D. Note that (1 − ε)ftar + εg′ = (1 − ε)f′tar + εg. In order for Rε to decontaminate D with respect to Dcon, we need Rε((1 − ε)ftar + εg′) = ftar and Rε((1 − ε)f′tar + εg) = f′tar, which is impossible since ftar ≠ f′tar. This proposition imposes significant limitations on what contamination models can be decontaminated. For example, suppose we know that fcon is Gaussian with known covariance matrix and unknown mean. Proposition 1 says we cannot design Rε so that it can decontaminate (1 − ε)ftar + εfcon for all ftar ∈ D. In other words, it is impossible to design an algorithm capable of removing Gaussian contamination (for example) from arbitrary target densities. Furthermore, if Rε decontaminates V and V is fully nonparametric (i.e. for all f ∈ D there exists some f′ ∈ D such that (f, f′) ∈ V), then for each (ftar, fcon) pair, fcon must satisfy some properties which depend on ftar. 2.1 Proposed Contamination Model For a function f : Rd → R, let supp(f) denote the support of f. We introduce the following contamination assumption: Assumption A. For the pair (ftar, fcon), there exists u such that fcon(x) = u for almost all (in the Lebesgue sense) x ∈ supp(ftar) and fcon(x′) ≤ u for almost all x′ ∉ supp(ftar). See Figure 1 for an example of a density satisfying this assumption. Because fcon must be uniform over the support of ftar, a consequence of Assumption A is that supp(ftar) has finite Lebesgue measure. Let VA be the contamination model containing all pairs of densities which satisfy Assumption A. Note that ∪_{(ftar,fcon)∈VA} {ftar} is exactly the set of all densities whose support has finite Lebesgue measure, which includes all densities with compact support. The uniformity assumption on fcon is a common "noninformative" assumption on the contamination.
Furthermore, this assumption is supported by connections to one-class classification. In that problem, only one class (corresponding to our ftar) is observed for training, but the testing data is drawn from fobs and must be classified. The dominant paradigm for nonparametric one-class classification is to estimate a level set of ftar from the one observed training class [14, 7, 13, 16, 12, 9], and classify test data according to that level set. Yet level sets only yield optimal classifiers (i.e. likelihood ratio tests) under the uniformity assumption on fcon, so that these methods are implicitly adopting this assumption. Furthermore, a uniform contamination prior has been shown to optimize the worst-case detection rate among all choices for the unknown contamination density [5]. Finally, our experiments demonstrate that the SPKDE works well in practice, even when Assumption A is significantly violated. 2.2 Decontamination Procedure Under Assumption A, ftar is present in fobs and its shape is left unmodified (up to a multiplicative factor) by fcon. To recover ftar it is necessary to first scale fobs by β = 1/(1 − ε), yielding (1/(1 − ε))((1 − ε)ftar + εfcon) = ftar + (ε/(1 − ε))fcon. (1) After scaling we would like to slice off (ε/(1 − ε))fcon from the bottom of ftar + (ε/(1 − ε))fcon. This transform is achieved by max{0, ftar + (ε/(1 − ε))fcon − α}, (2) where α is set such that (2) is a pdf (which in this case is achieved with α = uε/(1 − ε)). We will now show that this transform is well defined in a general sense. Let f be a pdf and let gβ,α = max{0, βf(·) − α}, where the max is defined pointwise. The following lemma shows that it is possible to slice off the bottom of any scaled pdf to get a transformed pdf and that the transformed pdf is unique. Lemma 1. For fixed β > 1 there exists a unique α′ > 0 such that ∥gβ,α′∥L1 = 1. Figure 2 demonstrates this transformation applied to a pdf.
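The slicing transform can be approximated numerically on a grid: by Lemma 1 the mass of max{βf(·) − α, 0} decreases continuously in α, so the normalizing α can be found by bisection. A minimal sketch (the grid discretization, tolerance, and triangular test density are our own choices for illustration):

```python
import numpy as np

def slice_to_pdf(f_vals, dx, beta, tol=1e-10):
    """Given density values f_vals on a uniform grid with spacing dx and a scale
    beta > 1, find alpha such that g = max(beta * f - alpha, 0) integrates to 1
    (Lemma 1). The mass of g decreases continuously in alpha, so bisect."""
    lo, hi = 0.0, beta * f_vals.max()
    mass = lambda a: np.maximum(beta * f_vals - a, 0.0).sum() * dx
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mass(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    alpha = 0.5 * (lo + hi)
    return np.maximum(beta * f_vals - alpha, 0.0), alpha

# Example: slice a triangular density scaled by beta = 1.25.
x = np.linspace(-1, 1, 20001)
dx = x[1] - x[0]
f = np.maximum(1 - np.abs(x), 0.0)        # integrates to 1 on the grid
g, alpha = slice_to_pdf(f, dx, beta=1.25)
print(g.sum() * dx, alpha)                # mass ~ 1, alpha > 0
```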
We define the following transform R^A_ε : D → D where R^A_ε(f) = max{(1/(1 − ε))f(·) − α, 0}, where α is such that R^A_ε(f) is a pdf. Figure 1: Density with contamination satisfying Assumption A (the layer εfcon sits beneath (1 − ε)ftar). Proposition 2. R^A_ε decontaminates VA. The proof of this proposition is an intermediate step for the proof of Theorem 2. For any two subsets V, V′ ⊂ D × D, Rε decontaminates V and V′ iff Rε decontaminates V ∪ V′. Because of this, every decontaminating transform has a maximal set which it can decontaminate. Assumption A is both sufficient and necessary for decontamination by R^A_ε, i.e. the set VA is maximal. Proposition 3. Let (q, q′) ∈ D × D and (q, q′) ∉ VA. R^A_ε cannot decontaminate {(q, q′)}. The proof of this proposition is in the supplementary material. 2.3 Other Possible Contamination Models Figure 2: Infinite sample SPKDE transform (original density, scaled density, shifted to pdf). Arrows indicate the area under the line. The model described previously is just one of many possible models. An obvious approach to robust kernel density estimation is to use an anomaly detection algorithm and construct the KDE using only nonanomalous samples. We will investigate this model under a couple of anomaly detection schemes and describe their properties. One of the most common methods for anomaly detection is the level set method. For a probability measure µ this method attempts to find the set S with smallest Lebesgue measure such that µ(S) is above some threshold, t, and declares samples outside of that set as being anomalous. For a density f this is equivalent to finding λ such that ∫_{x : f(x) ≥ λ} f(y) dy = t and declaring samples where f(X) < λ as being anomalous. Let X1, . . . , Xn be iid samples from fobs. Using the level set method for a robust KDE, we would construct a density f̂obs which is an estimate of fobs. Next we would select some threshold λ > 0 and declare a sample, Xi, as being anomalous if f̂obs(Xi) < λ.
Finally we would construct a KDE using the non-anomalous samples. Let χ{·} be the indicator function. Applying this method in the infinite sample situation, i.e. f̂obs = fobs, would cause our non-anomalous samples to come from the density p(x) = fobs(x)χ{fobs(x) > λ}/τ, where τ = ∫ χ{fobs(y) > λ}fobs(y) dy. See Figure 3. Perfect recovery of ftar using this method requires εfcon(x) ≤ (1 − ε)ftar(x) for all x and that fcon and ftar have disjoint supports. The first assumption means that this density estimator can only recover ftar if it has a drop off on the boundary of its support, whereas Assumption A only requires that ftar have finite support. See the last diagram in Figure 3. Although these assumptions may be reasonable in certain situations, we find them less palatable than Assumption A. We also evaluate this approach experimentally later and find that it performs poorly. Figure 3: Infinite sample version of the level set rejection KDE (threshold at λ, set the density under the threshold to 0, normalize to integrate to 1). Another approach based on anomaly detection would be to find the connected components of fobs and declare those that are, in some sense, small as being anomalous. A "small" connected component may be one that integrates to a small value, or which has a small mode. Unfortunately this approach also assumes that ftar and fcon have disjoint supports. There are also computational issues with this anomaly detection scheme; finding connected components, finding modes, and numerical integration are computationally difficult. To some degree, R^A_ε actually achieves the objectives of the previous two robust KDEs. For the first model, R^A_ε does indeed set those regions of the pdf that are below some threshold to zero. For the second, if the magnitude of the level at which we choose to slice off the bottom of the contaminated density is larger than the mode of the anomalous component, then the anomalous component will be eliminated.
3 Scaled Projection Kernel Density Estimator Here we consider approximating R^A_ε in a finite sample situation. Let f ∈ L2(Rd) be a pdf and X1, . . . , Xn be iid samples from f. Let kσ(x, x′) be a radial smoothing kernel with bandwidth σ such that kσ(x, x′) = σ^{−d} q(∥x − x′∥2/σ), where q(∥·∥2) ∈ L2(Rd) and is a pdf. The classic kernel density estimator is f̄^n_σ := (1/n) Σ_{i=1}^n kσ(·, Xi). In practice ε is usually not known and Assumption A is violated. Because of this we will scale our density by β > 1 rather than 1/(1 − ε). For a density f define Qβ(f) ≜ max{βf(·) − α, 0}, where α = α(β) is set such that the RHS is a pdf. β can be used to tune robustness, with larger β corresponding to more robustness (setting β to 1 in all the following transforms simply yields the KDE). Given a KDE we would ideally like to apply Qβ directly and search over α until max{β f̄^n_σ(·) − α, 0} integrates to 1. Such an estimate requires multidimensional numerical integration and is not computationally tractable. The SPKDE is an alternative approach that always yields a density and manifests the transformed density in its asymptotic limit. We now introduce the construction of the SPKDE. Let D^n_σ be the convex hull of kσ(·, X1), . . . , kσ(·, Xn) (the space of weighted kernel density estimators). The SPKDE is defined as f^n_{σ,β} := argmin_{g ∈ D^n_σ}
∥β f̄^n_σ − g∥_{L2}, which is guaranteed to have a unique minimizer since D^n_σ is closed and convex and we are projecting in a Hilbert space ([1] Theorem 3.14). If we represent f^n_{σ,β} in the form f^n_{σ,β} = Σ_{i=1}^n ai kσ(·, Xi), then the minimization problem is a quadratic program over the vector a = [a1, . . . , an]^T, with a restricted to the probabilistic simplex, ∆n. Let G be the Gram matrix of kσ(·, X1), . . . , kσ(·, Xn), that is, Gij = ⟨kσ(·, Xi), kσ(·, Xj)⟩_{L2} = ∫ kσ(x, Xi) kσ(x, Xj) dx. Let 1 be the ones vector and b = (β/n)G1; then the quadratic program is min_{a ∈ ∆n} a^T G a − 2b^T a. Since G is a Gram matrix, and therefore positive-semidefinite, this quadratic program is convex. Furthermore, the integral defining Gij can be computed in closed form for many kernels of interest. For example, for the Gaussian kernel kσ(x, x′) = (2πσ^2)^{−d/2} exp(−∥x − x′∥^2/(2σ^2)) ⟹ Gij = k_{√2σ}(Xi, Xj), and for the Cauchy kernel [2] kσ(x, x′) = (Γ((1 + d)/2)/π^{(d+1)/2}) σ^{−d} (1 + ∥x − x′∥^2/σ^2)^{−(1+d)/2} ⟹ Gij = k_{2σ}(Xi, Xj). We now present some results on the asymptotic behavior of the SPKDE. Let D be the set of all pdfs in L2(Rd). The infinite sample version of the SPKDE is f′_β = argmin_{h ∈ D} ∥βf − h∥^2_{L2}. It is worth noting that projection operators in Hilbert space, like the one above, are known to be well defined if the convex set we are projecting onto is closed and convex. D is not closed in L2(Rd), but this turns out not to be an issue because of the form of βf. For details see the proof of Lemma 2 in the supplemental material. Lemma 2. f′_β = max{βf(·) − α, 0} where α is set such that max{βf(·) − α, 0} is a pdf. Given the same rate on bandwidth necessary for consistency of the traditional KDE, the SPKDE converges to its infinite sample version in its asymptotic limit. Theorem 1. Let f ∈ L2(Rd). If n → ∞ and σ → 0 with nσ^d → ∞ then
∥f^n_{σ,β} − f′_β∥_{L2} → 0 in probability. Because f^n_{σ,β} is a sequence of pdfs and f′_β ∈ L2(Rd), it is possible to show L2 convergence implies L1 convergence. Corollary 1. Given the conditions in the previous theorem statement,
∥f^n_{σ,β} − f′_β∥_{L1} → 0 in probability. To summarize, the SPKDE converges to a transformed version of f. In the next section we will show that under Assumption A and with β = 1/(1 − ε), the SPKDE converges to ftar. 3.1 SPKDE Decontamination Let ftar ∈ L2(Rd) be a pdf having support with finite Lebesgue measure and let ftar and fcon satisfy Assumption A. Let X1, X2, . . . , Xn be iid samples from fobs = (1 − ε)ftar + εfcon with ε ∈ [0, 1). Finally let f^n_{σ,β} be the SPKDE constructed from X1, . . . , Xn, having bandwidth σ and robustness parameter β. We have Theorem 2. Let β = 1/(1 − ε). If n → ∞ and σ → 0 with nσ^d → ∞ then
∥f^n_{σ,β} − ftar∥_{L1} → 0 in probability. To our knowledge this result is the first of its kind, wherein a nonparametric density estimator is able to asymptotically recover the underlying density in the presence of contaminated data. 4 Experiments For all of the experiments, optimization was performed using projected gradient descent. The projection onto the probabilistic simplex was done using the algorithm developed in [4] (which was actually originally discovered a few decades ago [3, 8]). 4.1 Synthetic Data To show that the SPKDE's theoretical properties are manifested in practice, we conducted an idealized experiment where the contamination is uniform and the contamination proportion is known. Figure 4 exhibits the ability of the SPKDE to compensate for uniform noise. Samples for the density estimator came from a mixture of the "Target" density with a uniform contamination on [−2, 2], sampling from the contamination with probability ε = 0.2. This experiment used 500 samples and the robustness parameter β was set to 1/(1 − ε) = 5/4 (the value for perfect asymptotic decontamination). The SPKDE performs well in this situation and yields a scaled and shifted version of the standard KDE. This scale and shift is especially evident in the preservation of the bump on the right hand side of Figure 4. 4.2 Datasets In our remaining experiments we investigate two performance metrics for different amounts of contamination. We perform our experiments on 12 classification datasets (names given in the supplemental material) where the 0 label is used as the target density and the 1 label is the anomalous contamination. This experimental setup does not satisfy Assumption A. The training datasets are constructed with n0 samples from label 0 and (ε/(1 − ε))n0 samples from label 1, thus making an ε proportion of our samples come from the contaminating density. For our experiments we use the values ε = 0, 0.05, 0.1, 0.15, 0.20, 0.25, 0.30.
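The optimization just described can be sketched end to end: the Gaussian Gram matrix in its closed form Gij = k_{√2σ}(Xi, Xj), the sort-based simplex projection of [4], and projected gradient descent on the quadratic program. The step size and iteration count below are our own assumptions, not the paper's settings.

```python
import numpy as np

def gaussian_gram(X, sigma):
    """Closed-form Gram matrix for the Gaussian kernel: G_ij = k_{sqrt(2) sigma}(X_i, X_j)."""
    d = X.shape[1]
    s = np.sqrt(2) * sigma
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return (2 * np.pi * s ** 2) ** (-d / 2) * np.exp(-sq / (2 * s ** 2))

def project_simplex(v):
    """Euclidean projection onto {a : a >= 0, sum(a) = 1} (sort-based method of [4])."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def spkde_weights(X, sigma, beta, steps=2000):
    """Projected gradient descent for min_{a in simplex} a^T G a - 2 b^T a, b = (beta/n) G 1."""
    n = X.shape[0]
    G = gaussian_gram(X, sigma)
    b = beta / n * G @ np.ones(n)
    lr = 1.0 / (2 * np.linalg.norm(G, 2))    # 1 / Lipschitz constant of the gradient
    a = np.ones(n) / n                       # start from the ordinary KDE weights
    for _ in range(steps):
        a = project_simplex(a - lr * (2 * (G @ a) - 2 * b))
    return a

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 1))
a = spkde_weights(X, sigma=0.4, beta=2.0)
print(a.sum(), a.min())   # weights lie on the probabilistic simplex
```

The resulting ai are the kernel weights of the SPKDE; with β = 1 the scaled KDE already lies in the convex hull, consistent with the remark that β = 1 recovers the standard KDE.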
Given some dataset we are interested in how well our density estimators f̂ estimate the density of the 0 class of our dataset, ftar. Each test is performed on 15 permutations of the dataset. The experimental setup here is similar to the setup in Kim & Scott [6], the most significant difference being that σ is set differently. 4.3 Performance Criteria Figure 4: KDE and SPKDE in the presence of uniform noise (curves: KDE, SPKDE, Target). First we investigate the Kullback-Leibler (KL) divergence D_KL(f̂ ∥ f0) = ∫ f̂(x) log(f̂(x)/f0(x)) dx. This KL divergence is large when f̂ estimates f0 to have mass where it does not. For example, in our context, f̂ makes mistakes because of outlying contamination. We estimate this KL divergence as follows. Since we do not have access to f0, it is estimated from the testing sample using a KDE, f̃0. The bandwidth for f̃0 is set using the testing data with a LOOCV line search minimizing D_KL(f0 ∥ f̃0), which is described in more detail below. We then approximate the integral using a sample mean by generating samples from f̂, {x′_i}_{i=1}^{n′}, and using the estimate D_KL(f̂ ∥ f0) ≈ (1/n′) Σ_{i=1}^{n′} log(f̂(x′_i)/f̃0(x′_i)). The number of generated samples n′ is set to double the number of training samples. Since KL divergence isn't symmetric we also investigate D_KL(f0 ∥ f̂) = ∫ f0(x) log(f0(x)/f̂(x)) dx = C − ∫ f0(y) log f̂(y) dy, where C is a constant not depending on f̂. This KL divergence is large when f0 has mass where f̂ does not. The final term is easy to estimate using expectation. Let {x″_i}_{i=1}^{n″} be testing samples from f0 (not used for training). The following is a reasonable approximation: −∫ f0(y) log f̂(y) dy ≈ −(1/n″) Σ_{i=1}^{n″} log f̂(x″_i). For a given performance metric and contamination amount, we compare the mean performance of two density estimators across datasets using the Wilcoxon signed rank test [17].
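The sample-mean approximation of D_KL above can be sanity-checked on densities with a known divergence; a minimal sketch (the Gaussian test case is ours, not one of the paper's datasets):

```python
import numpy as np

def kl_mc(sample_p, logpdf_p, logpdf_q, n, rng):
    """Monte Carlo estimate of D_KL(p || q) = E_{x ~ p}[log p(x) - log q(x)],
    the same sample-mean approximation used for the performance criteria."""
    x = sample_p(n, rng)
    return np.mean(logpdf_p(x) - logpdf_q(x))

# Sanity check on two 1-d Gaussians with equal variance, where
# D_KL(N(m1, s^2) || N(m2, s^2)) = (m1 - m2)^2 / (2 s^2) in closed form.
m1, m2, s = 0.0, 0.5, 1.0
rng = np.random.default_rng(3)
logpdf = lambda m: (lambda x: -0.5 * np.log(2 * np.pi * s**2) - (x - m)**2 / (2 * s**2))
est = kl_mc(lambda n, r: r.normal(m1, s, n), logpdf(m1), logpdf(m2), 200000, rng)
exact = (m1 - m2) ** 2 / (2 * s ** 2)   # = 0.125
print(est, exact)
```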
Given N datasets, we first rank the datasets according to the absolute difference in the performance criterion, with hi being the rank of the ith dataset. For example, if the jth dataset has the largest absolute difference we set hj = N, and if the kth dataset has the smallest absolute difference we set hk = 1. We let R1 be the sum of the hi's where method one's metric is greater than method two's, and R2 be the sum of the hi's where method two's metric is larger. The test statistic is min(R1, R2), which we do not report. Instead we report R1 and R2 and the p-value that the two methods do not perform the same on average. Ri < Rj is indicative of method i performing better than method j. 4.4 Methods The data were preprocessed by scaling to fit in the unit cube. This scaling technique was chosen over whitening because of issues with singular covariance matrices. The Gaussian kernel was used for all density estimates. For each permutation of each dataset, the bandwidth parameter is set using the training data with a LOOCV line search minimizing D_KL(fobs ∥ f̂), where f̂ is the KDE based on the contaminated data and fobs is the observed density. This metric was used in order to maximize the performance of the traditional KDE in KL divergence metrics. For the SPKDE the parameter β was chosen to be 2 for all experiments.
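The R1/R2 statistics described above can be computed directly; a small sketch (tie handling is simplified relative to the standard Wilcoxon procedure, and the toy per-dataset scores are hypothetical):

```python
import numpy as np

def wilcoxon_ranks(metric1, metric2):
    """Rank datasets by |metric1 - metric2| (smallest difference gets rank 1,
    largest gets rank N) and return (R1, R2): R1 sums the ranks of datasets
    where method one's metric is larger, R2 those where method two's is larger.
    Exact ties in |difference| are broken by index here (a simplification)."""
    d = np.asarray(metric1) - np.asarray(metric2)
    order = np.argsort(np.abs(d), kind="stable")
    ranks = np.empty(len(d))
    ranks[order] = np.arange(1, len(d) + 1)
    return ranks[d > 0].sum(), ranks[d < 0].sum()

# Example with N = 5 hypothetical per-dataset scores for two methods:
r1, r2 = wilcoxon_ranks([0.30, 0.25, 0.40, 0.10, 0.22],
                        [0.32, 0.20, 0.33, 0.30, 0.26])
print(r1, r2)   # R1 = 7.0, R2 = 8.0 for these scores
```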
This choice of β is based on a few preliminary experiments for which it yielded good results over various sample contamination amounts. Table 1: Wilcoxon signed rank test results. Left half: Wilcoxon test applied to D_KL(f̂ ∥ f0); right half: Wilcoxon test applied to D_KL(f0 ∥ f̂).

ε        0      0.05   0.1    0.15   0.2    0.25   0.3   |  0      0.05   0.1    0.15   0.2    0.25   0.3
SPKDE    5      0      1      2      0      0      0     |  37     30     27     21     17     16     17
KDE      73     78     77     76     78     78     78    |  41     48     51     57     61     62     61
p-value  .0049  5e-4   1e-3   .0015  5e-4   5e-4   5e-4  |  .91    .52    .38    .18    .092   .078   .092
SPKDE    53     59     58     67     63     61     63    |  14     14     14     10     10     12     12
RKDE     25     19     20     11     15     17     15    |  64     64     64     68     68     66     66
p-value  0.31   0.13   0.15   .027   .064   .092   .064  |  .052   .052   .052   .021   .021   .034   .034
SPKDE    0      0      1      1      0      2      0     |  29     21     19     15     13     9      11
rejKDE   78     78     77     77     78     76     78    |  49     57     59     63     65     69     67
p-value  5e-4   5e-4   1e-3   1e-3   5e-4   .0015  5e-4  |  .47    .18    .13    .064   .043   .016   .027

The construction of the RKDE follows exactly the methods outlined in the "Experiments" section of Kim & Scott [6]. It is worth noting that the RKDE depends on the loss function used and that the Hampel loss used in these experiments very aggressively suppresses the kernel weights on the tails. Because of this we expect that RKDE performs well on the D_KL(f̂ ∥ f0) metric. We also compare the SPKDE to a kernel density estimator constructed from samples declared non-anomalous by a level set anomaly detection as described in Section 2.3. To do this we first construct the classic KDE, f̄^n_σ, and then reject those samples in the lower 10th percentile of f̄^n_σ(Xi). Those samples not rejected are used in a new KDE, the "rejKDE", using the same σ parameter. 4.5 Results We present the results of the Wilcoxon signed rank tests in Table 1. Experimental results for each dataset can be found in the supplemental material. From the results it is clear that the SPKDE is effective at compensating for contamination in the D_KL(f̂ ∥ f0) metric, albeit not quite as well as the RKDE.
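The rejKDE baseline just described is straightforward to sketch in one dimension (the sample sizes, bandwidth, and densities below are our own choices for illustration):

```python
import numpy as np

def kde(X, sigma):
    """Return the classic 1-d Gaussian KDE as a callable density estimate."""
    def f(x):
        x = np.atleast_1d(x)
        z = (x[:, None] - X[None, :]) / sigma
        return np.mean(np.exp(-0.5 * z**2), axis=1) / (sigma * np.sqrt(2 * np.pi))
    return f

def rej_kde(X, sigma, pct=10):
    """Level-set rejection KDE sketch: fit a pilot KDE, drop the samples whose
    pilot density falls in the lowest `pct` percentile, refit with the same sigma."""
    pilot = kde(X, sigma)
    dens = pilot(X)
    keep = dens >= np.percentile(dens, pct)
    return kde(X[keep], sigma), keep

rng = np.random.default_rng(4)
X = np.concatenate([rng.normal(0, 1, 450),      # target samples
                    rng.uniform(5, 10, 50)])    # contaminating samples
f_rej, keep = rej_kde(X, sigma=0.3)
print(keep.mean())   # about 90% of samples survive the rejection step
```

Low-density (mostly contaminated) samples are the ones rejected, which matches the failure mode discussed in Section 2.3: the scheme only fully removes contamination whose support is disjoint from the target's.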
The main advantage of the SPKDE over the RKDE is that it significantly outperforms the RKDE in the D_KL(f0 ∥ f̂) metric. The rejKDE performs significantly worse than the SPKDE on almost every experiment. Remarkably, the SPKDE outperforms the KDE in the situation with no contamination (ε = 0) for both performance metrics. 5 Conclusion Robustness in the setting of nonparametric density estimation is a topic that has received little attention despite extensive study of robustness in the parametric setting. In this paper we introduced a robust version of the KDE, the SPKDE, and developed a new formalism for analysis of robust density estimation. With this new formalism we proposed a contamination model and decontaminating transform to recover a target density in the presence of noise. The contamination model allows that the target and contaminating densities have overlapping support and that the basic shape of the target density is not modified by the contaminating density. The proposed transform is computationally prohibitive to apply directly to the finite sample KDE and the SPKDE is used to approximate the transform. The SPKDE was shown to asymptotically converge to the desired transform. Experiments have shown that the SPKDE is more effective than the RKDE at minimizing D_KL(f0 ∥ f̂). Furthermore, the p-values for these experiments were smaller than the p-values for the D_KL(f̂ ∥ f0) experiments where the RKDE outperforms the SPKDE. Acknowledgements This work was supported in part by NSF Awards 0953135, 1047871, 1217880, 1422157. We would also like to thank Samuel Brodkey for his assistance with the simulation code. References [1] H.H. Bauschke and P.L. Combettes. Convex analysis and monotone operator theory in Hilbert spaces. CMS Books in Mathematics, Ouvrages de mathématiques de la SMC. Springer New York, 2011. [2] D.A. Berry, K.M. Chaloner, J.K. Geweke, and A. Zellner. Bayesian Analysis in Statistics and Econometrics: Essays in Honor of Arnold Zellner.
A Wiley Interscience publication. Wiley, 1996. [3] Peter Brucker. An O(n) algorithm for quadratic knapsack problems. Operations Research Letters, 3(3):163–166, 1984. [4] John C. Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In ICML, pages 272–279, 2008. [5] R. El-Yaniv and M. Nisenson. Optimal single-class classification strategies. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Adv. in Neural Inform. Proc. Systems 19. MIT Press, Cambridge, MA, 2007. [6] J. Kim and C. Scott. Robust kernel density estimation. J. Machine Learning Res., 13:2529–2565, 2012. [7] G. Lanckriet, L. El Ghaoui, and M. I. Jordan. Robust novelty detection with single-class MPM. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 905–912. MIT Press, Cambridge, MA, 2003. [8] P.M. Pardalos and N. Kovoor. An algorithm for a singly constrained class of quadratic programs subject to upper and lower bounds. Mathematical Programming, 46(1-3):321–328, 1990. [9] B. Schölkopf, J. Platt, J. Shawe-Taylor, A. Smola, and R. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1472, 2001. [10] D. W. Scott. Multivariate Density Estimation. Wiley, New York, 1992. [11] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall, London, 1986. [12] K. Sricharan and A. Hero. Efficient anomaly detection using bipartite k-nn graphs. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 478–486. 2011. [13] I. Steinwart, D. Hush, and C. Scovel. A classification framework for anomaly detection. JMLR, 6:211–232, 2005. [14] J. Theiler and D. M. Cai. Resampling approach for anomaly detection in multispectral images. In Proc. SPIE, volume 5093, pages 230–240, 2003. [15] R. Vandermeulen and C. Scott.
Consistency of robust kernel density estimators. COLT, 30, 2013. [16] R. Vert and J.-P. Vert. Consistency and convergence rates of one-class SVM and related algorithms. JMLR, pages 817–854, 2006. [17] F. Wilcoxon. Individual comparisons by ranking methods. Biometrics Bulletin, 1(6):80–83, 1945.
|
2014
|
19
|
5,280
|
Online and Stochastic Gradient Methods for Non-decomposable Loss Functions Purushottam Kar∗ Harikrishna Narasimhan† Prateek Jain∗ ∗Microsoft Research, INDIA †Indian Institute of Science, Bangalore, INDIA {t-purkar,prajain}@microsoft.com, harikrishna@csa.iisc.ernet.in Abstract Modern applications in sensitive domains such as biometrics and medicine frequently require the use of non-decomposable loss functions such as precision@k, F-measure etc. Compared to point loss functions such as hinge-loss, these offer much more fine grained control over prediction, but at the same time present novel challenges in terms of algorithm design and analysis. In this work we initiate a study of online learning techniques for such non-decomposable loss functions with an aim to enable incremental learning as well as design scalable solvers for batch problems. To this end, we propose an online learning framework for such loss functions. Our model enjoys several nice properties, chief amongst them being the existence of efficient online learning algorithms with sublinear regret and online to batch conversion bounds. Our model is a provable extension of existing online learning models for point loss functions. We instantiate two popular losses, Prec@k and pAUC, in our model and prove sublinear regret bounds for both of them. Our proofs require a novel structural lemma over ranked lists which may be of independent interest. We then develop scalable stochastic gradient descent solvers for non-decomposable loss functions. We show that for a large family of loss functions satisfying a certain uniform convergence property (that includes Prec@k, pAUC, and F-measure), our methods provably converge to the empirical risk minimizer. Such uniform convergence results were not known for these losses and we establish these using novel proof techniques. 
We then use extensive experimentation on real life and benchmark datasets to establish that our method can be orders of magnitude faster than a recently proposed cutting plane method. 1 Introduction Modern learning applications frequently require a level of fine-grained control over prediction performance that is not offered by traditional "per-point" performance measures such as hinge loss. Examples include datasets with mild to severe label imbalance such as spam classification, wherein positive instances (spam emails) constitute a tiny fraction of the available data, and learning tasks such as those in medical diagnosis which make it imperative for learning algorithms to be sensitive to class imbalances. Other popular examples include ranking tasks where precision in the top ranked results is valued more than overall precision/recall characteristics. The performance measures of choice in these situations are those that evaluate algorithms over the entire dataset in a holistic manner. Consequently, these measures are frequently non-decomposable over data points. More specifically, for these measures, the loss on a set of points cannot be expressed as the sum of losses on individual data points (unlike hinge loss, for example). Popular examples of such measures include F-measure, Precision@k, (partial) area under the ROC curve etc. Despite their success in these domains, non-decomposable loss functions are not nearly as well understood as their decomposable counterparts. The study of point loss functions has led to a deep understanding about their behavior in batch and online settings and tight characterizations of their generalization abilities. The same cannot be said for most non-decomposable losses. For instance, in the popular online learning model, it is difficult to even instantiate a non-decomposable loss function as defining the per-step penalty itself becomes a challenge.
1.1 Our Contributions

Our first main contribution is a framework for online learning with non-decomposable loss functions. The main hurdle in this task is a proper definition of instantaneous penalties for non-decomposable losses. Instead of resorting to canonical definitions, we set up our framework in a principled way that fulfills the objectives of an online model. Our framework has a very desirable characteristic that allows it to recover existing online learning models when instantiated with point loss functions. Our framework also admits online-to-batch conversion bounds. We then propose an efficient Follow-the-Regularized-Leader [1] algorithm within our framework. We show that for loss functions that satisfy a generic "stability" condition, our algorithm is able to offer vanishing $O(1/\sqrt{T})$ regret. Next, we instantiate within our framework convex surrogates for two popular performance measures, namely Precision at k (Prec@k) and the partial area under the ROC curve (pAUC) [2], and show, via a stability analysis, that we do indeed achieve sublinear regret bounds for these loss functions. Our stability proofs involve a structural lemma on sorted lists of inner products which proves Lipschitz continuity properties for measures on such lists (see Lemma 2) and might be useful for analyzing non-decomposable loss functions in general. A key property of online learning methods is their applicability in designing solvers for offline/batch problems. With this goal in mind, we design a stochastic gradient-based solver for non-decomposable loss functions. Our methods apply to a wide family of loss functions (including Prec@k, pAUC and the F-measure) that were introduced in [3] and have been widely adopted [4, 5, 6] in the literature. We design several variants of our method and show that our methods provably converge to the empirical risk minimizer of the learning instance at hand.
Our proofs involve uniform convergence-style results which were not known for the loss functions we study and require novel techniques, in particular the structural lemma mentioned above. Finally, we conduct extensive experiments on real-life and benchmark datasets with pAUC and Prec@k as performance measures. We compare our methods to state-of-the-art methods that are based on cutting plane techniques [7]. The results establish that our methods can be significantly faster, all the while offering comparable or higher accuracy values. For example, on a KDD 2008 challenge dataset, our method was able to achieve a pAUC value of 64.8% within 30 ms, whereas it took the cutting plane method more than 1.2 seconds to achieve a comparable performance.

1.2 Related Work

Non-decomposable loss functions such as Prec@k, (partial) AUC, and the F-measure, owing to their demonstrated ability to give better performance in situations with label imbalance, have generated significant interest within the learning community. From their role in early works as indicators of performance on imbalanced datasets [8], their importance has risen to a point where they have become the learning objectives themselves. Due to their complexity, methods that try to indirectly optimize these measures are very common, e.g. [9], [10] and [11], which study the F-measure. However, such methods frequently seek to learn a complex probabilistic model, a task arguably harder than the one at hand itself. On the other hand are algorithms that perform optimization directly via structured losses. Starting from the seminal work of [3], this method has received a lot of interest for measures such as the F-measure [3], average precision [4], pAUC [7] and various ranking losses [5, 6]. These formulations typically use cutting plane methods to design dual solvers. We note that the learning and game theory communities are also interested in non-additive notions of regret and utility.
In particular, [12] provides a generic framework for online learning with non-additive notions of regret, with a focus on showing regret bounds for mixed strategies in a variety of problems. However, even polynomial time implementation of their strategies is difficult in general. Our focus, on the other hand, is on developing efficient online algorithms that can be used to solve large scale batch problems. Moreover, it is not clear how (if at all) the loss functions considered here (such as Prec@k) can be instantiated in their framework. Recently, online learning for AUC maximization has received some attention [13, 14]. Although AUC is not a point loss function, it still decomposes over pairs of points in a dataset, a fact that [13] and [14] crucially use. The loss functions in this paper do not exhibit any such decomposability.

2 Problem Formulation

Let $x_{1:t} := \{x_1, \ldots, x_t\}$, $x_i \in \mathbb{R}^d$, and $y_{1:t} := \{y_1, \ldots, y_t\}$, $y_i \in \{-1, 1\}$, be the observed data points and true binary labels. We will use $\hat{y}_{1:t} := \{\hat{y}_1, \ldots, \hat{y}_t\}$, $\hat{y}_i \in \mathbb{R}$, to denote the predictions of a learning algorithm. We shall, for the sake of simplicity, restrict ourselves to linear predictors $\hat{y}_i = w^\top x_i$ for parameter vectors $w \in \mathbb{R}^d$. A performance measure $P : \{-1,1\}^t \times \mathbb{R}^t \to \mathbb{R}_+$ shall be used to evaluate the predictions of the learning algorithm against the true labels. Our focus shall be on non-decomposable performance measures such as Prec@k and partial AUC. Since these measures are typically non-convex, convex surrogate loss functions are used instead (we will use the terms loss function and performance measure interchangeably). A popular technique for constructing such loss functions is the structural SVM formulation [3] given below. For simplicity, we shall drop mention of the training points and use the notation $\ell_P(w) := \ell_P(x_{1:T}, y_{1:T}, w)$:

$$\ell_P(w) = \max_{\bar{y} \in \{-1,+1\}^T} \sum_{i=1}^{T} (\bar{y}_i - y_i)\, x_i^\top w - P(\bar{y}, y). \qquad (1)$$

Precision@k.
The Prec@k measure ranks the data points in order of the predicted scores $\hat{y}_i$ and then returns the number of true positives in the top ranked positions. This is valuable in situations where there are very few positives. To formalize this, for any predictor $w$ and set of points $x_{1:t}$, define $S(x, w) := \{j : w^\top x > w^\top x_j\}$ to be the set of points which $w$ ranks above $x$. Then define

$$T_{\beta,t}(x, w) = \begin{cases} 1, & \text{if } |S(x, w)| < \lceil \beta t \rceil, \\ 0, & \text{otherwise,} \end{cases} \qquad (2)$$

i.e. $T_{\beta,t}(x, w)$ is non-zero iff $x$ is in the top-$\beta$ fraction of the set. Then we define¹

$$\text{Prec@k}(w) := \sum_{j : T_{k,t}(x_j, w) = 1} \mathbb{I}[y_j = 1].$$

The structural surrogate for this measure is then calculated as²

$$\ell_{\text{Prec@k}}(w) = \max_{\substack{\bar{y} \in \{-1,+1\}^t \\ \sum_i (\bar{y}_i + 1) = 2kt}} \sum_{i=1}^{t} (\bar{y}_i - y_i)\, x_i^\top w - \sum_{i=1}^{t} y_i \bar{y}_i. \qquad (3)$$

Partial AUC. This measures the area under the ROC curve with the false positive rate restricted to the range $[0, \beta]$. This is in contrast to AUC, which considers the entire range $[0, 1]$ of false positive rates. pAUC is useful in medical applications such as cancer detection where a small false positive rate is desirable. Let us extend notation to use the indicator $T^-_{\beta,t}(x, w)$ to select the top $\beta$ fraction of the negatively labeled points, i.e. $T^-_{\beta,t}(x, w) = 1$ iff $|\{j : y_j < 0, w^\top x > w^\top x_j\}| \le \lceil \beta t^- \rceil$, where $t^-$ is the number of negatives. Then we define

$$\text{pAUC}(w) = \sum_{i : y_i > 0} \; \sum_{j : y_j < 0} T^-_{\beta,t}(x_j, w) \cdot \mathbb{I}[x_i^\top w \ge x_j^\top w]. \qquad (4)$$

Let $\phi : \mathbb{R} \to \mathbb{R}_+$ be any convex, monotone, Lipschitz classification surrogate. Then we can obtain convex surrogates for $\text{pAUC}(w)$ by replacing the indicator functions above with $\phi(\cdot)$:

$$\ell_{\text{pAUC}}(w) = \sum_{i : y_i > 0} \; \sum_{j : y_j < 0} T^-_{\beta,t}(x_j, w) \cdot \phi(x_i^\top w - x_j^\top w). \qquad (5)$$

It can be shown [7, Theorem 4] that the structural surrogate for pAUC is equivalent to (5) with $\phi(c) = \max(0, 1 - c)$, the hinge loss function. In the next section we will develop an online learning framework for non-decomposable performance measures and instantiate loss functions such as $\ell_{\text{Prec@k}}$ and $\ell_{\text{pAUC}}$ in our framework.
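The two (unsurrogated) measures just defined can be computed directly from scores; the following minimal sketch (mine, not the paper's code) implements Prec@k as the number of true positives in the top ⌈βt⌉ ranked positions, and the unnormalized pAUC of eq. (4) as the number of positives scored above the top-β fraction of negatives.

```python
import math

def prec_at_k(scores, labels, beta):
    """Prec@k: true positives among the top ceil(beta*t) points ranked by score."""
    t = len(scores)
    top = sorted(range(t), key=lambda i: -scores[i])[:math.ceil(beta * t)]
    return sum(1 for i in top if labels[i] == 1)

def pauc(scores, labels, beta):
    """Unnormalized partial AUC, cf. eq. (4): count positives ranked above
    the top-beta fraction of negatives (the highest-scoring negatives)."""
    pos = [i for i, l in enumerate(labels) if l == 1]
    neg = [j for j, l in enumerate(labels) if l == -1]
    top_neg = sorted(neg, key=lambda j: -scores[j])[:math.ceil(beta * len(neg))]
    return sum(1 for i in pos for j in top_neg if scores[i] >= scores[j])
```

With β = 1, `pauc` counts all correctly ordered positive-negative pairs, i.e. the usual unnormalized AUC; smaller β restricts attention to the hardest negatives.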
Then in Section 4, we will develop stochastic gradient methods for non-decomposable loss functions and prove error bounds for the same. There we will focus on a much larger family of loss functions including Prec@k, pAUC and the F-measure.

¹An equivalent definition considers k to be the number of top ranked points instead.
²[3] uses a slightly modified, but equivalent, definition that considers labels to be Boolean.

3 Online Learning with Non-decomposable Loss Functions

We now present our online learning framework for non-decomposable loss functions. Traditional online learning takes place in several rounds, in each of which the player proposes some $w_t \in \mathcal{W}$ while the adversary responds with a penalty function $L_t : \mathcal{W} \to \mathbb{R}$ and a loss $L_t(w_t)$ is incurred. The goal is to minimize the regret, i.e. $\sum_{t=1}^{T} L_t(w_t) - \min_{w \in \mathcal{W}} \sum_{t=1}^{T} L_t(w)$. For point loss functions, the instantaneous penalty $L_t(\cdot)$ is encoded using a data point $(x_t, y_t) \in \mathbb{R}^d \times \{-1, 1\}$ as $L_t(w) = \ell_P(x_t, y_t, w)$. However, for (surrogates of) non-decomposable loss functions such as $\ell_{\text{pAUC}}$ and $\ell_{\text{Prec@k}}$, the definition of the instantaneous penalty itself is not clear and remains a challenge. To guide us in this process we turn to some properties of standard online learning frameworks. For point losses, we note that the best solution in hindsight is also the batch optimal solution. This is equivalent to the condition $\arg\min_{w \in \mathcal{W}} \sum_{t=1}^{T} L_t(w) = \arg\min_{w \in \mathcal{W}} \ell_P(x_{1:T}, y_{1:T}, w)$ for non-decomposable losses. Also, since the batch optimal solution is agnostic to the ordering of points, we should expect $\sum_{t=1}^{T} L_t(w)$ to be invariant to permutations within the stream. By pruning away several naive definitions of $L_t$ using these requirements, we arrive at the following definition:

$$L_t(w) = \ell_P(x_{1:t}, y_{1:t}, w) - \ell_P(x_{1:(t-1)}, y_{1:(t-1)}, w). \qquad (6)$$

It turns out that the above is a very natural penalty function, as it measures the amount of "extra" penalty incurred due to the inclusion of $x_t$ into the set of points.
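The two properties required of the penalty in eq. (6) are easy to check numerically: the $L_t$ telescope to the batch loss, and their sum is permutation-invariant whenever the batch loss is. The sketch below (mine; the "max score minus mean score" loss is just a toy stand-in for a non-decomposable $\ell_P$) verifies both.

```python
def batch_loss(points, w):
    """Toy stand-in for a non-decomposable batch loss ell_P(x_{1:t}, y_{1:t}, w):
    max score minus mean score, which does not decompose over points."""
    if not points:
        return 0.0
    scores = [w * x for x, _ in points]
    return max(scores) - sum(scores) / len(scores)

def instantaneous_penalty(points, t, w):
    """L_t(w) = ell_P(x_{1:t}, w) - ell_P(x_{1:t-1}, w), as in eq. (6)."""
    return batch_loss(points[:t], w) - batch_loss(points[:t - 1], w)

stream = [(1.0, 1), (3.0, -1), (2.0, 1), (5.0, -1)]
w = 0.7
# Telescoping: the penalties sum back to the full batch loss.
total = sum(instantaneous_penalty(stream, t, w) for t in range(1, len(stream) + 1))
```

Note that each individual $L_t$ depends on the stream order, but their sum does not, exactly as the framework requires.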
It can be readily verified that $\sum_{t=1}^{T} L_t(w) = \ell_P(x_{1:T}, y_{1:T}, w)$, as required. Also, this penalty function seamlessly generalizes online learning frameworks, since for point losses we have $\ell_P(x_{1:t}, y_{1:t}, w) = \sum_{i=1}^{t} \ell_P(x_i, y_i, w)$ and thus $L_t(w) = \ell_P(x_t, y_t, w)$. We note that our framework also recovers the model for online AUC maximization used in [13] and [14]. The notion of regret corresponding to this penalty is

$$R(T) = \frac{1}{T} \sum_{t=1}^{T} L_t(w_t) - \min_{w \in \mathcal{W}} \frac{1}{T}\, \ell_P(x_{1:T}, y_{1:T}, w).$$

We note that $L_t$, being the difference of two loss functions, is non-convex in general and thus standard online convex programming regret bounds cannot be applied in our framework. Interestingly, as we show below, by exploiting structural properties of our penalty function we can still get efficient low-regret learning algorithms, as well as online-to-batch conversion bounds, in our framework.

3.1 Low Regret Online Learning

We propose an efficient Follow-the-Regularized-Leader (FTRL) style algorithm in our framework. Let $w_1 = \arg\min_{w \in \mathcal{W}} \|w\|_2^2$ and consider the following update:

$$w_{t+1} = \arg\min_{w \in \mathcal{W}} \sum_{\tau=1}^{t} L_\tau(w) + \frac{\eta}{2}\|w\|_2^2 = \arg\min_{w \in \mathcal{W}} \ell_P(x_{1:t}, y_{1:t}, w) + \frac{\eta}{2}\|w\|_2^2. \qquad \text{(FTRL)}$$

We would like to stress that despite the non-convexity of $L_t$, the FTRL objective is strongly convex if $\ell_P$ is convex, and thus the update can be implemented efficiently by solving a regularized batch problem on $x_{1:t}$. We now present our regret bound analysis for the FTRL update given above.

Theorem 1. Let $\ell_P(\cdot, w)$ be a convex loss function and $\mathcal{W} \subseteq \mathbb{R}^d$ be a convex set. Assume w.l.o.g. $\|x_t\|_2 \le 1$ for all $t$. Also, for the penalty function $L_t$ in (6), let $|L_t(w) - L_t(w')| \le G_t \cdot \|w - w'\|_2$ for all $t$ and all $w, w' \in \mathcal{W}$, for some $G_t > 0$. Suppose we use the update step (FTRL) to obtain $w_{t+1}$, $0 \le t \le T-1$. Then for all $w^*$ we have

$$\frac{1}{T} \sum_{t=1}^{T} L_t(w_t) \le \frac{1}{T}\, \ell_P(x_{1:T}, y_{1:T}, w^*) + \frac{\|w^*\|_2 \sqrt{2 \sum_{t=1}^{T} G_t^2}}{T}.$$

See Appendix A for a proof. The above result requires the penalty function $L_t$ to be Lipschitz continuous, i.e. to be "stable" w.r.t. $w$.
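The (FTRL) update above reduces to a regularized batch problem at each round; the following toy sketch (mine, not the paper's solver) carries out one such update by subgradient descent on the pAUC hinge surrogate with β = 1, using a scalar model to keep it minimal.

```python
def pauc_hinge(X, y, w):
    """Pairwise hinge surrogate for AUC (pAUC with beta = 1), cf. eq. (5);
    scalar features and scalar w for simplicity."""
    return sum(max(0.0, 1.0 - w * (X[i] - X[j]))
               for i in range(len(y)) if y[i] == 1
               for j in range(len(y)) if y[j] == -1)

def ftrl_step(X, y, eta=1.0, iters=200, lr=0.01):
    """One (FTRL) update: approximately solve
    argmin_w  pauc_hinge(X, y, w) + (eta/2) * w^2
    over the stream prefix seen so far, by subgradient descent."""
    w = 0.0
    for _ in range(iters):
        g = eta * w                                # gradient of the regularizer
        for i in range(len(y)):
            if y[i] != 1:
                continue
            for j in range(len(y)):
                if y[j] == -1 and 1.0 - w * (X[i] - X[j]) > 0:
                    g -= X[i] - X[j]               # active hinge subgradient
        w -= lr * g
    return w
```

On a single positive/negative pair the regularized objective is minimized near the point where the hinge becomes inactive, and the sketch converges to a neighborhood of that point.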
Establishing this for point losses such as the hinge loss is relatively straightforward. However, the same becomes non-trivial for non-decomposable loss functions, as $L_t$ is now the difference of two loss functions, both of which involve $\Omega(t)$ data points. A naive argument would thus only be able to show $G_t \le O(t)$, which would yield vacuous regret bounds. Instead, we now show that for the surrogate loss functions for Prec@k and pAUC, this Lipschitz continuity property does indeed hold. Our proofs crucially use a structural lemma, given below, which shows that sorted lists of inner products are Lipschitz at each fixed position.

Lemma 2 (Structural Lemma). Let $x_1, \ldots, x_t$ be $t$ points with $\|x_i\|_2 \le 1$ for all $i$. Let $w, w' \in \mathcal{W}$ be any two vectors. Let $z_i = \langle w, x_i \rangle - c_i$ and $z'_i = \langle w', x_i \rangle - c_i$, where $c_i \in \mathbb{R}$ are constants independent of $w, w'$. Also, let $\{i_1, \ldots, i_t\}$ and $\{j_1, \ldots, j_t\}$ be orderings of indices such that $z_{i_1} \ge z_{i_2} \ge \cdots \ge z_{i_t}$ and $z'_{j_1} \ge z'_{j_2} \ge \cdots \ge z'_{j_t}$. Then for any 1-Lipschitz increasing function $g : \mathbb{R} \to \mathbb{R}$ (i.e. $|g(u) - g(v)| \le |u - v|$ and $u \le v \Leftrightarrow g(u) \le g(v)$), we have, for all $k$, $|g(z_{i_k}) - g(z'_{j_k})| \le 3\|w - w'\|_2$.

See Appendix B for a proof. Using this lemma we can show that the Lipschitz constant for $\ell_{\text{Prec@k}}$ is bounded by $G_t \le 8$, which gives us an $O(\sqrt{1/T})$ regret bound for Prec@k (see Appendix C for the proof). In Appendix D, we show that the same technique can be used to prove a stability result for the structural SVM surrogate of the Precision-Recall Break Even Point (PRBEP) performance measure [3] as well. The case of pAUC is handled similarly. However, since pAUC discriminates between positives and negatives, our previous analysis cannot be applied directly. Nevertheless, we can obtain the following regret bound for pAUC (a proof will appear in the full version of the paper).

Theorem 3. Let $T_+$ and $T_-$ respectively be the number of positive and negative points in the stream, and let $w_{t+1}$, $0 \le t \le T-1$, be obtained using the FTRL algorithm (FTRL).
Then we have

$$\frac{1}{\beta T_+ T_-} \sum_{t=1}^{T} L_t(w_t) \le \min_{w \in \mathcal{W}} \frac{1}{\beta T_+ T_-}\, \ell_{\text{pAUC}}(x_{1:T}, y_{1:T}, w) + O\!\left(\sqrt{\frac{1}{T_+} + \frac{1}{T_-}}\right).$$

Notice that the above regret bound depends on both $T_+$ and $T_-$, and the regret becomes large even if one of them is small. This is actually quite intuitive because if, say, $T_+ = 1$ and $T_- = T - 1$, an adversary may wish to provide the lone positive point in the last round. Naturally the algorithm, having only seen negatives till then, would not be able to perform well and would incur a large error.

3.2 Online-to-batch Conversion

To present our bounds we generalize our framework slightly: we now consider the stream of $T$ points to be composed of $T/s$ batches $Z_1, \ldots, Z_{T/s}$ of size $s$ each. Thus, the instantaneous penalty is now defined as $L_t(w) = \ell_P(Z_1, \ldots, Z_t, w) - \ell_P(Z_1, \ldots, Z_{t-1}, w)$ for $t = 1, \ldots, T/s$, and the regret becomes $R(T, s) = \frac{1}{T} \sum_{t=1}^{T/s} L_t(w_t) - \min_{w \in \mathcal{W}} \frac{1}{T}\, \ell_P(x_{1:T}, y_{1:T}, w)$. Let $R_P$ denote the population risk for the (normalized) performance measure $P$. Then we have:

Theorem 4. Suppose the sequence of points $(x_t, y_t)$ is generated i.i.d. and let $w_1, w_2, \ldots, w_{T/s}$ be an ensemble of models generated by an online learning algorithm upon receiving these $T/s$ batches. Suppose the online learning algorithm has a guaranteed regret bound $R(T, s)$. Then for $\bar{w} = \frac{1}{T/s} \sum_{t=1}^{T/s} w_t$, any $w^* \in \mathcal{W}$, $\epsilon \in (0, 0.5]$ and $\delta > 0$, with probability at least $1 - \delta$,

$$R_P(\bar{w}) \le (1 + \epsilon) R_P(w^*) + R(T, s) + e^{-\Omega(s\epsilon^2)} + \tilde{O}\!\left(\sqrt{\frac{s \ln(1/\delta)}{T}}\right).$$

In particular, setting $s = \tilde{O}(\sqrt{T})$ and $\epsilon = \sqrt[4]{1/T}$ gives us, with probability at least $1 - \delta$,

$$R_P(\bar{w}) \le R_P(w^*) + R(T, \sqrt{T}) + \tilde{O}\!\left(\sqrt[4]{\frac{\ln(1/\delta)}{T}}\right).$$

We conclude by noting that for Prec@k and pAUC, $R(T, \sqrt{T}) \le O(\sqrt[4]{1/T})$ (see Appendix E).

4 Stochastic Gradient Methods for Non-decomposable Losses

The online learning algorithms discussed in the previous section present attractive guarantees in the sequential prediction model but are required to solve batch problems at each stage.
This rapidly becomes infeasible for large scale data. To remedy this, we now present memory efficient stochastic gradient descent methods for batch learning with non-decomposable loss functions. The motivation for our approach comes from mini-batch methods used to make learning methods for point loss functions amenable to distributed computing environments [15, 16]; we exploit these techniques to offer scalable algorithms for non-decomposable loss functions.

Single-pass Method with Mini-batches. The method assumes access to a limited memory buffer and takes a single pass over the data stream. The stream is partitioned into epochs. In each epoch, the method accumulates points from the stream, uses them to form gradient estimates, and takes descent steps. The buffer is flushed after each epoch. Algorithm 1 describes the 1PMB method. Gradient computations can be done using Danskin's theorem (see Appendix H).

Algorithm 1 — 1PMB: Single-Pass with Mini-batches
Input: step length scale η, buffer B of size s. Output: a good predictor w ∈ W.
1: w_0 ← 0, B ← ∅, e ← 0
2: while stream not exhausted do
3:   collect s data points (x_1^e, y_1^e), ..., (x_s^e, y_s^e) in buffer B
4:   set step length η_e ← η/√e
5:   w_{e+1} ← Π_W [w_e + η_e ∇_w ℓ_P(x_{1:s}^e, y_{1:s}^e, w_e)]   // Π_W projects onto the set W
6:   flush buffer B
7:   e ← e + 1   // start a new epoch
8: end while
9: return w̄ = (1/e) Σ_{i=1}^{e} w_i

Algorithm 2 — 2PMB: Two-Passes with Mini-batches
Input: step length scale η, buffers B_+, B_− of size s. Output: a good predictor w ∈ W.
Pass 1: B_+ ← ∅
1: collect a random sample of positives x_1^+, ..., x_s^+ in B_+
Pass 2: w_0 ← 0, B_− ← ∅, e ← 0
2: while stream of negative points not exhausted do
3:   collect s negative points x_1^{e−}, ..., x_s^{e−} in B_−
4:   set step length η_e ← η/√e
5:   w_{e+1} ← Π_W [w_e + η_e ∇_w ℓ_P(x_{1:s}^{e−}, x_{1:s}^{+}, w_e)]
6:   flush buffer B_−
7:   e ← e + 1   // start a new epoch
8: end while
9: return w̄ = (1/e) Σ_{i=1}^{e} w_i

Two-pass Method with Mini-batches.
The previous algorithm is unable to exploit relationships between data points across epochs, which may help improve performance for loss functions such as pAUC. To remedy this, we observe that several real-life learning scenarios exhibit mild to severe label imbalance (see Table 2 in Appendix H), which makes it possible to store all or a large fraction of the points of the rare label. Our two-pass method exploits this by utilizing two passes over the data: the first pass collects all (or a random subset of) points of the rare label using some stream sampling technique [13]. The second pass then goes over the stream, restricted to the non-rare label points, and performs gradient updates. See Algorithm 2 for details of the 2PMB method.

4.1 Error Bounds

Given a set of $n$ labeled data points $(x_i, y_i)$, $i = 1, \ldots, n$, and a performance measure $P$, our goal is to approximate the empirical risk minimizer $w^* = \arg\min_{w \in \mathcal{W}} \ell_P(x_{1:n}, y_{1:n}, w)$ as closely as possible. In this section we shall show that our methods 1PMB and 2PMB provably converge to the empirical risk minimizer. We first introduce the notion of uniform convergence for a performance measure.

Definition 5. We say that a loss function $\ell$ demonstrates uniform convergence with respect to a set of predictors $\mathcal{W}$ if, for some $\alpha(s, \delta) = \mathrm{poly}\!\left(\frac{1}{s}, \log\frac{1}{\delta}\right)$, when given a set of $s$ points $\bar{x}_1, \ldots, \bar{x}_s$ chosen randomly from an arbitrary set of $n$ points $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, then w.p. at least $1 - \delta$ we have

$$\sup_{w \in \mathcal{W}} \left|\ell_P(x_{1:n}, y_{1:n}, w) - \ell_P(\bar{x}_{1:s}, \bar{y}_{1:s}, w)\right| \le \alpha(s, \delta).$$

Such uniform convergence results are fairly common for decomposable loss functions such as the squared loss or logistic loss. However, the same is not true for non-decomposable loss functions, barring a few exceptions [17, 10]. To bridge this gap, below we show that a large family of surrogate loss functions for popular non-decomposable performance measures does indeed exhibit uniform convergence.
Our proofs require novel techniques and do not follow from traditional proof progressions. However, we first show how we can use these results to arrive at an error bound.

Theorem 6. Suppose the loss function $\ell$ is convex and demonstrates $\alpha(s, \delta)$-uniform convergence. Also suppose we have an arbitrary set of $n$ points which are randomly ordered. Then the predictor $\bar{w}$ returned by 1PMB with buffer size $s$ satisfies, w.p. $1 - \delta$,

$$\ell_P(x_{1:n}, y_{1:n}, \bar{w}) \le \ell_P(x_{1:n}, y_{1:n}, w^*) + 2\alpha\!\left(s, \frac{s\delta}{n}\right) + O\!\left(\sqrt{\frac{s}{n}}\right).$$

[Figure 1: Comparison of stochastic gradient methods with the cutting plane (CP) and projected subgradient (PSG) methods on partial AUC maximization tasks (average pAUC in [0, 0.1] vs. training time) on the (a) PPI, (b) KDDCup08, (c) IJCNN, and (d) Letter datasets. The epoch lengths/buffer sizes for 1PMB and 2PMB were set to 500.]

[Figure 2: Comparison of stochastic gradient methods with the cutting plane (CP) method on Prec@k maximization tasks (average Prec@k vs. training time) on the (a) PPI, (b) KDDCup08, (c) IJCNN, and (d) Letter datasets. The epoch lengths/buffer sizes for 1PMB and 2PMB were set to 500.]

We would like to stress that the above result does not assume i.i.d. data and works for arbitrary datasets so long as they are randomly ordered. We can show similar guarantees for the two pass method as well (see Appendix F).
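The two buffer-based methods analyzed above can be sketched compactly; the version below is mine (not the paper's implementation), using a pairwise hinge AUC surrogate and a scalar model for 1PMB, and reservoir sampling as one concrete choice of stream sampling for Pass 1 of 2PMB.

```python
import random

def one_pmb(stream, s, eta):
    """Sketch of Algorithm 1 (1PMB): one pass, buffer of size s, one
    (sub)gradient step per epoch on a pairwise hinge AUC surrogate over
    the buffer; returns the average of the iterates.
    stream: list of (x, y) pairs with scalar x and y in {-1, +1}."""
    w, iterates, e = 0.0, [], 0
    for start in range(0, len(stream), s):
        buf = stream[start:start + s]          # fill buffer B, flushed each epoch
        e += 1
        step = eta / e ** 0.5                  # eta_e = eta / sqrt(e)
        g = 0.0
        for xi, yi in buf:
            for xj, yj in buf:
                if yi == 1 and yj == -1 and 1.0 - w * (xi - xj) > 0:
                    g -= xi - xj               # active hinge subgradient
        w -= step * g                          # descent step on the buffer loss
        iterates.append(w)
    return sum(iterates) / len(iterates)       # averaged predictor

def sample_rare_label(stream, s, rare_label=1, seed=0):
    """Sketch of Pass 1 of Algorithm 2 (2PMB): keep a uniform random sample
    of up to s rare-label points via reservoir sampling."""
    rng = random.Random(seed)
    buf, seen = [], 0
    for x, y in stream:
        if y != rare_label:
            continue
        seen += 1
        if len(buf) < s:
            buf.append(x)
        else:
            j = rng.randrange(seen)            # replace with probability s/seen
            if j < s:
                buf[j] = x
    return buf
```

Pass 2 of 2PMB would then stream only the non-rare points, pairing each buffered epoch of negatives against the fixed positive sample when forming gradients.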
Using regularized formulations, we can also exploit the logarithmic regret guarantees [18] offered by online gradient descent to improve this result; however, we do not explore those considerations here. Instead, we now look at specific instances of loss functions that possess the desired uniform convergence properties. As mentioned before, due to the combinatorial nature of these performance measures, our proofs do not follow from traditional methods.

Theorem 7 (Partial Area under the ROC Curve). For any convex, monotone, Lipschitz classification surrogate $\phi : \mathbb{R} \to \mathbb{R}_+$, the surrogate loss function for the $(0, \beta)$-partial AUC performance measure, defined as

$$\frac{1}{\lceil \beta n_- \rceil\, n_+} \sum_{i : y_i > 0} \; \sum_{j : y_j < 0} T^-_{\beta,t}(x_j, w) \cdot \phi(x_i^\top w - x_j^\top w),$$

exhibits uniform convergence at the rate $\alpha(s, \delta) = O\!\left(\sqrt{\log(1/\delta)/s}\right)$.

See Appendix G for a proof sketch. This result covers a large family of surrogate loss functions such as the hinge loss (5) and the logistic loss. Note that the insistence on including only top ranked negative points introduces a high degree of non-decomposability into the loss function. A similar result for the special case $\beta = 1$ is due to [17]. We extend the same to the more challenging case of $\beta < 1$.

Theorem 8 (Structural SVM loss for Prec@k). The structural SVM surrogate for the Prec@k performance measure (see (3)) exhibits uniform convergence at the rate $\alpha(s, \delta) = O\!\left(\sqrt{\log(1/\delta)/s}\right)$.

We defer the proof to the full version of the paper. The above result can be extended to a large family of performance measures introduced in [3] that have been widely adopted [10, 19, 8], such as the F-measure, G-mean, and PRBEP. The above indicates that our methods are expected to output models that closely approach the empirical risk minimizer for a wide variety of performance measures. In the next section we verify that this is indeed the case for several real-life and benchmark datasets.

5 Experimental Results

We evaluate the proposed stochastic gradient methods on several real-world and benchmark datasets.
Table 1: Comparison of training time (secs) and accuracies (in brackets) of 1PMB, 2PMB and cutting plane methods for pAUC (in [0, 0.1]) and Prec@k maximization tasks on the KDD Cup 2008 dataset.

Measure  | 1PMB        | 2PMB        | CP
pAUC     | 0.10 (68.2) | 0.15 (69.6) | 0.39 (62.5)
Prec@k   | 0.49 (42.7) | 0.55 (38.7) | 23.25 (40.8)

[Figure 3: Performance of 1PMB and 2PMB on the PPI dataset with varying epoch/buffer sizes for pAUC tasks (average pAUC vs. epoch length).]

Performance measures: We consider three measures: 1) partial AUC in the false positive range [0, 0.1], 2) Prec@k with k set to the proportion of positives (PRBEP), and 3) F-measure.

Algorithms: For partial AUC, we compare against the state-of-the-art cutting plane (CP) and projected subgradient (PSG) methods proposed in [7]; unlike the (online) stochastic methods considered in this work, the PSG method is a 'batch' algorithm which, at each iteration, computes a subgradient-based update over the entire training set. For Prec@k and F-measure, we compare our methods against cutting plane methods from [3]. We used structural SVM surrogates for all the measures.

Datasets: We used several data sets for our experiments (see Table 2 in Appendix H); of these, KDDCup08 is from the KDD Cup 2008 challenge and involves a breast cancer detection task [20], PPI contains data for a protein-protein interaction prediction task [21], and the remaining datasets are taken from the UCI repository [22].

Parameters: We used 70% of each data set for training and the remainder for testing, with the results averaged over 5 random train-test splits. Tunable parameters such as the step length scale were chosen using a small validation set. The epoch lengths/buffer sizes were set to 500 in all experiments.
Since a single iteration of the proposed stochastic methods is very fast in practice, we performed multiple passes over the training data (see Appendix H for details). The results for pAUC and Prec@k maximization tasks are shown in the Figures 1 and 2. We found the proposed stochastic gradient methods to be several orders of magnitude faster than the baseline methods, all the while achieving comparable or better accuracies. For example, for the pAUC task on the KDD Cup 2008 dataset, the 1PMB method achieved an accuracy of 64.81% within 0.03 seconds, while even after 0.39 seconds, the cutting plane method could only achieve an accuracy of 62.52% (see Table 1). As expected, the (online) stochastic gradient methods were faster than the ‘batch’ projected subgradient descent method for pAUC as well. We found similar trends on Prec@k (see Figure 2) and F-measure maximization tasks as well. For F-measure tasks, on the KDD Cup 2008 dataset, for example, the 1PMB method achieved an accuracy of 35.92 within 12 seconds whereas, even after 150 seconds, the cutting plane method could only achieve an accuracy of 35.25. The proposed stochastic methods were also found to be robust to changes in epoch lengths (buffer sizes) till such a point where excessively long epochs would cause the number of updates as well as accuracy to dip (see Figure 3). The 2PMB method was found to offer higher accuracies for pAUC maximization on several datasets (see Table 1 and Figure 1), as well as be more robust to changes in buffer size (Figure 3). We defer results on more datasets and performance measures to the full version of the paper. The cutting plane methods were generally found to exhibit a zig-zag behaviour in performance across iterates. This is because these methods solve the dual optimization problem for a given performance measure; hence the intermediate models do not necessarily yield good accuracies. 
On the other hand, (stochastic) gradient-based methods directly offer progress in terms of the primal optimization problem, and hence provide good intermediate solutions as well. This can be advantageous in scenarios with a time budget in the training phase.

Acknowledgements

The authors thank Shivani Agarwal for helpful comments. They also thank the anonymous reviewers for their suggestions. HN thanks support from a Google India PhD Fellowship.

References

[1] Alexander Rakhlin. Lecture Notes on Online Learning. http://www-stat.wharton.upenn.edu/~rakhlin/papers/online_learning.pdf, 2009.
[2] Harikrishna Narasimhan and Shivani Agarwal. A Structural SVM Based Approach for Optimizing Partial AUC. In 30th International Conference on Machine Learning (ICML), 2013.
[3] Thorsten Joachims. A Support Vector Method for Multivariate Performance Measures. In ICML, 2005.
[4] Yisong Yue, Thomas Finley, Filip Radlinski, and Thorsten Joachims. A Support Vector Method for Optimizing Average Precision. In SIGIR, 2007.
[5] Soumen Chakrabarti, Rajiv Khanna, Uma Sawant, and Chiru Bhattacharyya. Structured Learning for Non-Smooth Ranking Losses. In KDD, 2008.
[6] Brian McFee and Gert Lanckriet. Metric Learning to Rank. In ICML, 2010.
[7] Harikrishna Narasimhan and Shivani Agarwal. SVMpAUC-tight: A New Support Vector Method for Optimizing Partial AUC Based on a Tight Convex Upper Bound. In KDD, 2013.
[8] Miroslav Kubat and Stan Matwin. Addressing the Curse of Imbalanced Training Sets: One-Sided Selection. In International Conference on Machine Learning (ICML), 1997.
[9] Krzysztof Dembczyński, Willem Waegeman, Weiwei Cheng, and Eyke Hüllermeier. An Exact Algorithm for F-Measure Maximization. In NIPS, 2011.
[10] Nan Ye, Kian Ming A. Chai, Wee Sun Lee, and Hai Leong Chieu. Optimizing F-Measures: A Tale of Two Approaches. In 29th International Conference on Machine Learning (ICML), 2012.
[11] Krzysztof Dembczyński, Arkadiusz Jachnik, Wojciech Kotlowski, Willem Waegeman, and Eyke Hüllermeier. Optimizing the F-Measure in Multi-Label Classification: Plug-in Rule Approach versus Structured Loss Minimization. In 30th International Conference on Machine Learning (ICML), 2013.
[12] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online Learning: Beyond Regret. In 24th Annual Conference on Learning Theory (COLT), 2011.
[13] Purushottam Kar, Bharath K. Sriperumbudur, Prateek Jain, and Harish Karnick. On the Generalization Ability of Online Learning Algorithms for Pairwise Loss Functions. In ICML, 2013.
[14] Peilin Zhao, Steven C. H. Hoi, Rong Jin, and Tianbao Yang. Online AUC Maximization. In ICML, 2011.
[15] Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal Distributed Online Prediction Using Mini-Batches. Journal of Machine Learning Research, 13:165–202, 2012.
[16] Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Communication-Efficient Algorithms for Statistical Optimization. Journal of Machine Learning Research, 14:3321–3363, 2013.
[17] Stéphan Clémençon, Gábor Lugosi, and Nicolas Vayatis. Ranking and empirical minimization of U-statistics. Annals of Statistics, 36:844–874, 2008.
[18] Elad Hazan, Adam Kalai, Satyen Kale, and Amit Agarwal. Logarithmic Regret Algorithms for Online Convex Optimization. In COLT, pages 499–513, 2006.
[19] Sophia Daskalaki, Ioannis Kopanas, and Nikolaos Avouris. Evaluation of Classifiers for an Uneven Class Distribution Problem. Applied Artificial Intelligence, 20:381–417, 2006.
[20] R. Bharath Rao, Oksana Yakhnenko, and Balaji Krishnapuram. KDD Cup 2008 and the Workshop on Mining Medical Data. SIGKDD Explorations Newsletter, 10(2):34–38, 2008.
[21] Yanjun Qi, Ziv Bar-Joseph, and Judith Klein-Seetharaman. Evaluation of Different Biological Data and Computational Classification Methods for Use in Protein Interaction Prediction. Proteins, 63:490–500, 2006.
[22] A. Frank and Arthur Asuncion.
The UCI Machine Learning Repository. http://archive.ics.uci.edu/ml, 2010. University of California, Irvine, School of Information and Computer Sciences.
[23] Ankan Saha, Prateek Jain, and Ambuj Tewari. The interplay between stability and regret in online learning. CoRR, abs/1211.6158, 2012.
[24] Martin Zinkevich. Online Convex Programming and Generalized Infinitesimal Gradient Ascent. In ICML, pages 928–936, 2003.
[25] Robert J. Serfling. Probability Inequalities for the Sum in Sampling without Replacement. Annals of Statistics, 2(1):39–48, 1974.
[26] Dimitri P. Bertsekas. Nonlinear Programming: 2nd Edition. Belmont, MA: Athena Scientific, 2004.
On Model Parallelization and Scheduling Strategies for Distributed Machine Learning

†Seunghak Lee, †Jin Kyu Kim, †Xun Zheng, §Qirong Ho, †Garth A. Gibson, †Eric P. Xing
†School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
seunghak@, jinkyuk@, xunzheng@, garth@, epxing@cs.cmu.edu
§Institute for Infocomm Research, A*STAR, Singapore 138632
hoqirong@gmail.com

Abstract

Distributed machine learning has typically been approached from a data-parallel perspective, where big data are partitioned to multiple workers and an algorithm is executed concurrently over different data subsets under various synchronization schemes to ensure speed-up and/or correctness. A sibling problem that has received relatively less attention is how to ensure efficient and correct model-parallel execution of ML algorithms, where the parameters of an ML program are partitioned to different workers and undergo concurrent iterative updates. We argue that model and data parallelism impose rather different challenges for system design, algorithmic adjustment, and theoretical analysis. In this paper, we develop a system for model parallelism, STRADS, that provides a programming abstraction for scheduling parameter updates by discovering and leveraging changing structural properties of ML programs. STRADS enables a flexible tradeoff between scheduling efficiency and fidelity to intrinsic dependencies within the models, and improves the memory efficiency of distributed ML. We demonstrate the efficacy of model-parallel algorithms implemented on STRADS versus popular implementations for topic modeling, matrix factorization, and Lasso.

1 Introduction

Advancements in sensory technologies and digital storage media have led to a prevalence of "Big Data" collections that have inspired an avalanche of recent efforts on "scalable" machine learning (ML).
In particular, numerous data-parallel solutions from both algorithmic [28, 10] and system [7, 25] angles have been proposed to speed up inference and learning on Big Data. The recently emerged parameter server architecture [15, 18] has started to pave the way for a unified programming interface for data-parallel algorithms, based on various parallelization models such as stale synchronous parallelism (SSP) [15], eager SSP [5], and value-bound asynchronous parallelism [23]. However, in addition to Big Data, modern large-scale ML problems have started to encounter the so-called Big Model challenge [8, 1, 17], in which models with millions if not billions of parameters and/or variables (such as in deep networks [6] or large-scale topic models [20]) must be estimated from big (or even modestly-sized) datasets. Such Big Model problems have received less systematic investigation. In this paper, we propose a model-parallel framework for such an investigation. As is well known, a data-parallel algorithm computes, in parallel, a partial update of all model parameters (or latent model states in some cases) in each worker, based on only the subset of data on that worker and a local copy of the model parameters stored on that worker, and then aggregates these partial updates to obtain a global estimate of the model parameters [15]. In contrast, a model-parallel algorithm aims to update, in parallel, a subset of parameters on each worker — using either all data, or different subsets of the data [4] — in a way that preserves as much correctness as possible, by ensuring that the updates from each subset are highly compatible. Obviously, such a scheme directly alleviates the memory bottlenecks caused by massive parameter sizes in big models; but even for small or mid-sized models, an effective model-parallel scheme is still highly valuable because it can speed up an algorithm by updating multiple parameters concurrently, using multiple machines.
While data-parallel algorithms such as stochastic gradient descent [27] can be advantageous over their sequential counterparts — thanks to concurrent processing over data using various bounded-asynchronous schemes — they require every worker to have full access to all global parameters; furthermore, they leverage an assumption that different data subsets are i.i.d. given the shared global parameters. For a model-parallel program, however, in which model parameters are distributed to different workers, one cannot blindly apply such an i.i.d. assumption over arbitrary parameter subsets, because doing so will cause incorrect estimates due to the incompatibility of sub-results from different workers (e.g., imagine trivially parallelizing a long, simplex-constrained vector across multiple workers — independent updates will break the simplex constraint). Therefore, existing data-parallel schemes and frameworks, which cannot support sophisticated constraint and/or consistency satisfiability mechanisms across workers, are not easily adapted to model-parallel programs. On the other hand, as explored in a number of recent works, explicit analysis of the dependencies across model parameters, coupled with the design of suitable parallel schemes accordingly, opens up new opportunities for big models. For example, as shown in [4], model-parallel coordinate descent allows us to update multiple parameters in parallel, and our work in this paper furthers this approach by allowing some parameters to be prioritized over others. Furthermore, one can take advantage of model structures to avoid interference and loss of correctness during concurrent parameter updates (e.g., nearly independent parameters can be grouped to be updated in parallel [21]), and in this paper, we explore how to discover such structures in an efficient and scalable manner.
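The simplex-constraint example in the parenthetical above can be made concrete. In this toy sketch (illustrative Python, not code from the paper), two workers each own half of a probability vector and independently renormalize their own block; the merged result then no longer lies on the simplex.

```python
import numpy as np

# Toy illustration: a probability vector is a single constraint coupling ALL
# coordinates, so updating disjoint halves independently breaks it.
theta = np.array([0.4, 0.3, 0.2, 0.1])  # valid: sums to 1

def local_simplex_project(block):
    """Each worker renormalizes only the block it owns."""
    return block / block.sum()

# Naive model-parallel split across two workers:
w1 = local_simplex_project(theta[:2])  # worker 1 owns coords 0-1
w2 = local_simplex_project(theta[2:])  # worker 2 owns coords 2-3
merged = np.concatenate([w1, w2])

print(merged.sum())  # ~2.0, not 1.0: the global simplex constraint is broken
```

Restoring the constraint would require coordination across workers, which is exactly the kind of dependency a model-parallel scheduler must respect.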
To date, model-parallel algorithms are usually developed for a specific application such as matrix factorization [10] or Lasso [4] — thus, there is a need for programming abstractions and interfaces that can tackle the common challenges of Big Model problems, while also exposing new opportunities such as parameter prioritization to speed up convergence without compromising inference correctness. Effectively and conveniently programming a model-parallel algorithm stands as another challenge, as it requires mastery of detailed communication management in a cluster. Existing distributed frameworks such as MapReduce [7], Spark [25], and GraphLab [19] have shown that a variety of ML applications can be supported by a single, common programming interface (e.g., Map/Reduce or Gather/Apply/Scatter). Crucially, these frameworks allow the user to specify only a coarse order for parameter updates, while automatically deciding on the precise execution order — for example, MapReduce and Spark allow users to specify that parallel jobs should be executed in some topological order (e.g., mappers are guaranteed to be followed by reducers), but the system will execute the mappers in an arbitrary parallel or sequential order that it deems suitable. Similarly, GraphLab chooses the next node to be updated based on its "chromatic engine" and the user's choice of graph consistency model, but the user has only loose control over the update order (through the input graph structure). While this coarse-grained, fully automatic scheduling is certainly convenient, it does not offer the fine-grained control needed to avoid parallelization of parameters with subtle interdependencies that might not be visible in the superficial problem or graph structure (which can then lead to algorithm divergence, as in Lasso [4]).
Moreover, most of these frameworks do not allow users to easily prioritize parameters based on new criteria for more rapid convergence (though we note that GraphLab allows node prioritization through a priority queue). It is true that data-parallel algorithms can be implemented efficiently on these frameworks, and in principle one can also implement model-parallel algorithms on top of them. Nevertheless, we argue that without fine-grained control over parameter updates, we would miss many new opportunities for accelerating ML algorithm convergence. To address these challenges, we develop STRADS (STRucture-Aware Dynamic Scheduler), a system that performs automatic scheduling and parameter prioritization for dynamic Big Model parallelism, and is designed to enable investigation of new ML-system opportunities for efficient management of memory and accelerated convergence of ML algorithms, while making a best effort to preserve existing convergence guarantees for model-parallel algorithms (e.g., convergence of Lasso under parallel coordinate descent). STRADS provides a simple abstraction for users to program ML algorithms, consisting of three "conceptual" actions: schedule, push and pull. Schedule specifies the next subset of model parameters to be updated in parallel, push specifies how individual workers compute partial results on those parameters, and pull specifies how those partial results are aggregated to perform the full parameter update. A high-level view of STRADS is illustrated in Figure 1. We stress that these actions only specify the abstraction for managed model-parallel ML programs; they do not dictate the underlying implementation. A key-value store allows STRADS to handle a large number of parameters in distributed fashion, accessible from all master and worker machines.
[Figure 1: High-level architecture of our STRADS system interface for dynamic model parallelism.]

As a showcase for STRADS, we implement and provide schedule/push/pull pseudocode for three popular ML applications: topic modeling (LDA), matrix factorization (MF), and Lasso. It is our hope that: (1) the STRADS interface enables Big Model problems to be solved in distributed fashion with modest programming effort, and (2) the STRADS mechanism accelerates the convergence of Big ML algorithms through good scheduling (particularly through user-defined scheduling criteria). In our experiments, we present some evidence of STRADS's success: topic modeling with 3.9M docs, 10K topics, and a 21.8M vocabulary (200B parameters), MF with rank-2K on a 480K-by-10K matrix (1B parameters), and Lasso with 100M features (100M parameters).

2 Scheduling for Big Model Parallelism with STRADS

    // Generic STRADS application
    schedule() {
      // Select U params x[j] to be sent to the workers for updating
      ...
      return (x[j_1], ..., x[j_U])
    }
    push(worker = p, pars = (x[j_1], ..., x[j_U])) {
      // Compute partial update z for U params x[j] at worker p
      ...
      return z
    }
    pull(workers = [p], pars = (x[j_1], ..., x[j_U]), updates = [z]) {
      // Use partial updates z from workers p to update U params x[j].
      // sync() is automatic.
      ...
    }

Figure 2: STRADS interface: basic functional signatures of schedule, push, pull, in pseudocode.

"Model parallelism" refers to parallelization of an ML algorithm over the space of shared model parameters, rather than the space of (usually i.i.d.) data samples. At a high level, model parameters are the changing intermediate quantities that an ML algorithm iteratively updates until convergence is reached.
A key advantage of the model-parallel approach is that it explicitly partitions the model parameters into subsets, allowing ML problems with massive model spaces to be tackled on machines with limited memory (see supplement for details of STRADS memory usage). To enable users to systematically and programmatically exploit model parallelism, STRADS defines a programming interface where the user writes three functions for an ML problem: schedule, push and pull (Figures 1, 2). STRADS repeatedly schedules and executes these functions in that order, thus creating an iterative model-parallel algorithm. Below, we describe the three functions.

Schedule: This function selects U model parameters to be dispatched for updates (Figure 1). Within the schedule function, the programmer may access all data D and all model parameters x, in order to decide which U parameters to dispatch. A simple schedule is to select model parameters according to a fixed sequence, or drawn uniformly at random. As we shall later see, schedule also allows model parameters to be selected in a way that: (1) focuses on the fastest-converging parameters, while avoiding already-converged parameters; (2) avoids parallel dispatch of parameters with inter-dependencies, which can lead to divergence or parallelization errors.

Push & Pull: These functions describe the flow of model parameters x from the scheduler to the workers performing the update equations, as in Figure 1. Push dispatches a set of parameters $\{x_{j_1}, \ldots, x_{j_U}\}$ to each worker p, which then computes a partial update z for $\{x_{j_1}, \ldots, x_{j_U}\}$ (or a subset of it). When writing push, the user can take advantage of data partitioning: e.g., when only a fraction 1/P of the data samples is stored at each worker, the p-th worker should compute partial results $z_j^p = \sum_{D_i} f_{x_j}(D_i)$ by iterating over its 1/P share of the data points $D_i$. Pull is used to collect the partial results $\{z_j^p\}$ from all workers and commit them to the parameters $\{x_{j_1}, \ldots, x_{j_U}\}$.
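The schedule/push/pull round described above can be sketched as a simple driver loop. This is a hypothetical Python rendering of the abstraction: the function names follow Figure 2, but the driver and the toy distributed-mean instantiation are our illustration, not STRADS code.

```python
# Minimal sketch of the STRADS round loop: schedule picks parameters,
# each worker pushes a partial result, pull aggregates and commits.
def run_strads(schedule, push, pull, workers, num_rounds):
    for _ in range(num_rounds):
        params = schedule()                            # pick parameters to update
        partials = [push(p, params) for p in workers]  # per-worker partial results
        pull(workers, params, partials)                # aggregate and commit

# Toy instantiation: compute a distributed mean of per-worker data shards,
# with a single model parameter "mu".
data = {0: [1.0, 2.0], 1: [3.0, 4.0]}
state = {"mu": 0.0}

sched = lambda: ("mu",)

def push(worker, params):
    return sum(data[worker])            # partial sum over this worker's shard

def pull(workers, params, partials):
    n = sum(len(data[p]) for p in workers)
    state["mu"] = sum(partials) / n     # aggregate partials into the parameter

run_strads(sched, push, pull, workers=[0, 1], num_rounds=1)
print(state["mu"])  # 2.5
```

In real STRADS the `push` calls run concurrently on different machines and the parameters live in a distributed key-value store; the control flow is the same.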
Our STRADS LDA, MF, and Lasso applications partition the data samples uniformly over machines.

3 Leveraging Model-Parallelism in ML Applications through STRADS

In this section, we explore how users can apply model-parallelism to their ML applications, using STRADS. As case studies, we design and experiment on three ML applications — LDA, MF, and Lasso — in order to show that model-parallelism in STRADS can be simple to implement, yet also powerful enough to expose new and interesting opportunities for speeding up distributed ML.

3.1 Latent Dirichlet Allocation (LDA)

    // STRADS LDA
    schedule() {
      dispatch = []  // Empty list
      for a=1..U     // Rotation scheduling
        idx = ((a+C-1) mod U) + 1
        dispatch.append( V[q_idx] )
      return dispatch
    }
    push(worker = p, pars = [V_a, ..., V_U]) {
      t = []  // Empty list
      for (i,j) in W[q_p]  // Fast Gibbs sampling
        if w[i,j] in V_p
          t.append( (i,j,f_1(i,j,D,B)) )
      return t
    }
    pull(workers = [p], pars = [V_a, ..., V_U], updates = [t]) {
      for all (i,j)  // Update sufficient stats
        (D,B) = f_2([t])
    }

Figure 3: STRADS LDA pseudocode. Definitions for f_1, f_2, q_p are in the text. C is a global model parameter.

We introduce STRADS programming through topic modeling via LDA [3]. Big LDA models provide a strong use case for model-parallelism: when thousands of topics and millions of words are used, the LDA model contains billions of global parameters, and data-parallel implementations face the challenge of providing access to all these parameters; in contrast, model-parallelism explicitly divides up the parameters, so that workers only need to access a fraction of parameters at a given time. Formally, LDA takes a corpus of N documents as input — represented as word "tokens" $w_{ij} \in W$, where i is the document index and j is the word position index — and outputs K topics as well as N K-dimensional topic vectors (soft assignments of topics to each document).
LDA is commonly reformulated as a "collapsed" model [14], in which some of the latent variables are integrated out for faster inference. Inference is performed using Gibbs sampling, where each word-topic indicator (denoted $z_{ij} \in Z$) is sampled in turn according to its distribution conditioned on all other parameters. To perform this computation without having to iterate over all W, Z, sufficient statistics are kept in the form of a "doc-topic" table D and a "word-topic" table B. A full description of the LDA model is in the supplement.

[Figure 4: STRADS LDA: Parallelization error ∆t at each iteration, on the Wikipedia unigram dataset with K = 5000 and 64 machines.]

STRADS implementation: In order to perform model-parallelism, we first identify the model parameters and create a schedule strategy over them. In LDA, the assignments $z_{ij}$ are the model parameters, while D, B are summary statistics over $z_{ij}$ that are used to speed up the sampler. Our schedule strategy equally divides the V words into U subsets $V_1, \ldots, V_U$ (where U is the number of workers). Each worker will only sample words from one subset $V_a$ at a time (via push), and update the sufficient statistics D, B via pull. Subsequent invocations of schedule will "rotate" subsets amongst workers, so that every worker touches all U subsets every U invocations. For data partitioning, we divide the document tokens $w_{ij} \in W$ evenly across workers, and denote worker p's set of tokens by $W_{q_p}$, where $q_p$ is the index set for the p-th worker. Further details and analysis of the pseudocode, particularly how push-pull constitutes a model-parallel execution of LDA, are in the supplement.

Model parallelism results in low error: Parallel Gibbs sampling is not generally guaranteed to converge [12], unless the parameters being sampled for concurrent updates are conditionally independent of each other.
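The "rotate subsets amongst workers" strategy described above is what keeps concurrently sampled parameters on disjoint word subsets, and hence nearly conditionally independent. A minimal sketch of such a rotation (illustrative Python; the function name and structure are ours, and the real STRADS scheduler is more involved):

```python
# Rotation schedule over U workers and U word subsets V_0..V_{U-1}:
# in every round the workers hold pairwise-disjoint subsets, and over
# U rounds each worker touches every subset exactly once.
def rotation_schedule(U, round_c):
    """Return, for each worker p, the index of the word subset it samples
    in round `round_c`."""
    return [(p + round_c) % U for p in range(U)]

U = 4
for r in range(U):
    assignment = rotation_schedule(U, r)
    assert len(set(assignment)) == U  # disjoint subsets within a round

# Over U rounds, worker 0 touches every subset exactly once:
seen = {rotation_schedule(U, r)[0] for r in range(U)}
print(sorted(seen))  # [0, 1, 2, 3]
```

The disjointness within each round is the property that keeps the parallel Gibbs sampler's error low; the full-coverage property over U rounds ensures no word subset is starved.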
STRADS model-parallel LDA assigns workers to disjoint words V and documents $w_{ij}$; thus, each worker's parameters $z_{ij}$ are almost conditionally independent of other workers', resulting in very low sampling error.¹ As evidence, we define an error score ∆t that measures the divergence between the true word-topic distribution/table B and the local copy seen at each worker (a full mathematical explanation is in the supplement). ∆t ranges over [0, 2] (where 0 means no error). Figure 4 plots ∆t for the "Wikipedia unigram" dataset (see §5 for experimental details) with K = 5000 topics and 64 machines (128 processor cores total). ∆t is ≤ 0.002 throughout, confirming that STRADS LDA exhibits very small parallelization error.

¹This sampling error arises because workers see different versions of B — which is unavoidable when parallelizing LDA inference, because the Gibbs sampler is inherently sequential.

3.2 Matrix Factorization (MF)

    // STRADS Matrix Factorization
    schedule() {
      // Round-robin scheduling
      if counter <= U  // Do W
        return W[q_counter]
      else             // Do H
        return H[r_(counter-U)]
    }
    push(worker = p, pars = X[s]) {
      z = []  // Empty list
      if counter <= U  // X is from W
        for row in s, k=1..K
          z.append( (f_1(row,k,p), f_2(row,k,p)) )
      else             // X is from H
        for col in s, k=1..K
          z.append( (g_1(k,col,p), g_2(k,col,p)) )
      return z
    }
    pull(workers = [p], pars = X[s], updates = [z]) {
      if counter <= U  // X is from W
        for row in s, k=1..K
          W[row,k] = f_3(row,k,[z])
      else             // X is from H
        for col in s, k=1..K
          H[k,col] = g_3(k,col,[z])
      counter = (counter mod 2*U) + 1
    }

Figure 5: STRADS MF pseudocode. Definitions for f_1, g_1, ... and q_p, r_p are in the text. counter is a global model variable.

We now consider matrix factorization (collaborative filtering), which can be used to predict users' unknown preferences, given their known preferences and the preferences of others. Formally, MF takes an incomplete matrix $A \in \mathbb{R}^{N \times M}$ as input, where N is the number of users and M is the number of items.
The idea is to discover rank-K matrices $W \in \mathbb{R}^{N \times K}$ and $H \in \mathbb{R}^{K \times M}$ such that $WH \approx A$. Thus, the product WH can be used to predict the missing entries (user preferences). Let Ω be the set of indices of observed entries in A, let $\Omega_i$ be the set of observed column indices in the i-th row of A, and let $\Omega_j$ be the set of observed row indices in the j-th column of A. Then, the MF task is defined by the optimization problem: $\min_{W,H} \sum_{(i,j) \in \Omega} (a_{ij} - w_i h_j)^2 + \lambda(\|W\|_F^2 + \|H\|_F^2)$. We solve this objective using a parallel coordinate descent algorithm [24].

STRADS implementation: Our MF schedule strategy is to partition the rows of A into U disjoint index sets $q_p$, and the columns of A into U disjoint index sets $r_p$. We then dispatch the model parameters W, H in a round-robin fashion. To update the rows of W, each worker p uses push to compute partial summations on its assigned columns $r_p$ of A and H; the columns of H are updated similarly with rows $q_p$ of A and W. Finally, pull aggregates the partial summations and then updates the entries in W and H. In Figure 5, we show the STRADS MF pseudocode; further details are in the supplement.

3.3 Lasso

STRADS not only supports simple static schedules, but also dynamic, adaptive strategies that take the model state into consideration. Specifically, the STRADS Lasso implementation schedules parameter updates by (1) prioritizing coefficients that contribute the most to algorithm convergence, and (2) avoiding the simultaneous update of coefficients whose dimensions are highly inter-dependent. These properties complement each other in an algorithmically efficient way, as we shall see. Formally, Lasso is defined by the optimization problem: $\min_\beta \frac{1}{2} \|y - X\beta\|_2^2 + \lambda \sum_j |\beta_j|$, where λ is a regularization parameter that determines the sparsity of β. We solve Lasso using the coordinate descent (CD) update rule [9]: $\beta_j^{(t)} \leftarrow S(x_j^T y - \sum_{k \neq j} x_j^T x_k \beta_k^{(t-1)}, \lambda)$, where $S(g, \lambda) := \mathrm{sign}(g)\,(|g| - \lambda)_+$.
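The coordinate-descent update above can be sketched directly. This illustrative Python (a sketch, not the STRADS implementation) assumes the columns of X are normalized to unit norm, so that the soft-thresholding form of the update applies exactly as written:

```python
import numpy as np

# Soft-thresholding operator S(g, lambda) = sign(g) * max(|g| - lambda, 0)
def soft_threshold(g, lam):
    return np.sign(g) * max(abs(g) - lam, 0.0)

def cd_update(X, y, beta, j, lam):
    """One coordinate-descent step on beta_j (unit-norm columns assumed)."""
    others = X @ beta - X[:, j] * beta[j]   # fit excluding coordinate j
    g = X[:, j] @ (y - others)              # x_j^T y - sum_{k != j} x_j^T x_k beta_k
    return soft_threshold(g, lam)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
X /= np.linalg.norm(X, axis=0)              # normalize columns to unit norm
y = 2.0 * X[:, 0] + 0.01 * rng.standard_normal(50)
beta = np.zeros(3)
for _ in range(20):                         # a few full sweeps
    for j in range(3):
        beta[j] = cd_update(X, y, beta, j, lam=0.01)
print(beta)  # beta[0] close to 2; the irrelevant coefficients shrink toward 0
```

The STRADS Lasso worker computes exactly the inner product in `g` as a sum of per-shard partial results, which is what makes the update model-parallelizable.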
STRADS implementation: The Lasso schedule dynamically selects parameters to be updated with the following prioritization scheme: rapidly changing parameters are updated more frequently than others. First, we define a probability distribution $c = [c_1, \ldots, c_J]$ over β; the purpose of c is to prioritize the $\beta_j$'s during schedule, and thus speed up convergence. In particular, we observe that choosing $\beta_j$ with probability $c_j = f_1(j) \propto (\delta\beta_j^{(t-1)})^2 + \eta$ substantially speeds up the Lasso convergence rate, where η is a small positive constant and $\delta\beta_j^{(t-1)} = \beta_j^{(t-2)} - \beta_j^{(t-1)}$. To prevent non-convergence due to dimension inter-dependencies [4], we only schedule $\beta_j$ and $\beta_k$ for concurrent updates if $x_j^T x_k \approx 0$. This is performed as follows: first, select L' (> L) indices of coefficients from the probability distribution c to form a set C (|C| = L'). Next, choose a subset $B \subset C$ of size L such that $x_j^T x_k < \rho$ for all $j, k \in B$, where $\rho \in (0, 1]$; we represent this selection procedure by the function $f_2(C)$. Note that this procedure is inexpensive: by selecting L' candidate $\beta_j$'s first, only $L'^2$ dependencies need to be checked, as opposed to $J^2$, where J is the total number of features. Here L' and ρ are user-defined parameters. We execute push and pull to update the coefficients indexed by B using U workers in parallel. The rows of the data matrix X are partitioned into U submatrices, and the p-th worker stores the submatrix $X_{q_p} \in \mathbb{R}^{|q_p| \times J}$; with X partitioned in this manner, we need to modify the CD update rule accordingly. Using U workers, push computes U partial summations for each selected $\beta_j$, $j \in B$, denoted by $\{z_{j,1}^{(t)}, \ldots, z_{j,U}^{(t)}\}$, where $z_{j,p}$ represents the partial summation for $\beta_j$ in the p-th worker at the t-th iteration: $z_{j,p}^{(t)} \leftarrow f_3(p, j) := \sum_{i \in q_p} \big\{ (x_j^i)^T y - \sum_{k \neq j} (x_j^i)^T (x_k^i) \beta_k^{(t-1)} \big\}$. After all pushes have been completed, pull updates $\beta_j$ via $\beta_j^{(t)} = f_4(j, [z_{j,p}^{(t)}]) := S(\sum_{p=1}^U z_{j,p}^{(t)}, \lambda)$.
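The two-stage schedule above (priority sampling via f_1, then dependency filtering via f_2) can be sketched as follows. The function name `lasso_schedule` and the greedy filtering loop are our illustration; only the sampling distribution and the ρ threshold come from the text.

```python
import numpy as np

# Sketch of STRADS Lasso scheduling: sample L' candidate coordinates with
# probability proportional to (delta beta)^2 + eta (f_1), then greedily keep
# up to L of them whose pairwise column inner products stay below rho (f_2).
def lasso_schedule(X, delta_beta, L_prime, L, rho, eta=1e-6, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    J = X.shape[1]
    c = delta_beta ** 2 + eta
    c /= c.sum()                                  # f_1: priority distribution
    cand = rng.choice(J, size=L_prime, replace=False, p=c)
    chosen = []                                   # f_2: greedy 'safe' subset
    for j in cand:
        if all(abs(X[:, j] @ X[:, k]) < rho for k in chosen):
            chosen.append(j)
        if len(chosen) == L:
            break
    return chosen

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
X /= np.linalg.norm(X, axis=0)                    # unit-norm columns
delta = rng.standard_normal(20)                   # stand-in for delta beta
B = lasso_schedule(X, delta, L_prime=10, L=4, rho=0.3, rng=rng)
print(len(B) <= 4)  # True
```

Only the L'-by-L' candidate inner products are ever checked, which is what keeps the schedule cheap relative to the J-squared dependencies of the full problem.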
    // STRADS Lasso
    schedule() {
      // Priority-based scheduling
      for all j  // Get new priorities
        c_j = f_1(j)
      for a=1..L'  // Prioritize betas
        random draw s_a using [c_1, ..., c_J]
      // Get 'safe' betas
      (j_1, ..., j_L) = f_2(s_1, ..., s_L')
      return (b[j_1], ..., b[j_L])
    }
    push(worker = p, pars = (b[j_1], ..., b[j_L])) {
      z = []  // Empty list
      for a=1..L  // Compute partial sums
        z.append( f_3(p,j_a) )
      return z
    }
    pull(workers = [p], pars = (b[j_1], ..., b[j_L]), updates = [z]) {
      for a=1..L  // Aggregate partial sums
        b[j_a] = f_4(j_a,[z])
    }

Figure 6: STRADS Lasso pseudocode. Definitions for f_1, f_2, ... are given in the text.

Analysis of STRADS Lasso scheduling: We wish to highlight several notable aspects of the STRADS Lasso schedule described above. In brief, the sampling distribution $f_1(j)$ and the model dependency control scheme with threshold ρ allow STRADS to speed up the convergence rate of Lasso. To analyze this claim, let us rewrite the Lasso problem by duplicating the original features with opposite sign: $F(\beta) := \min_\beta \frac{1}{2} \|y - X\beta\|_2^2 + \lambda \sum_{j=1}^{2J} \beta_j$. Here, with an abuse of notation, X contains 2J features and $\beta_j \geq 0$ for all $j = 1, \ldots, 2J$. Then, we have the following analysis of our scheduling scheme.

Proposition 1. Suppose B is the set of indices of coefficients updated in parallel at the t-th iteration, and ρ is a sufficiently small constant such that $\rho\,\delta\beta_j^{(t)} \delta\beta_k^{(t)} \approx 0$ for all $j \neq k \in B$. Then, the sampling distribution $p(j) \propto (\delta\beta_j^{(t)})^2$ approximately maximizes a lower bound on $\mathbb{E}_B\big[F(\beta^{(t)}) - F(\beta^{(t)} + \Delta\beta^{(t)})\big]$.

Proposition 1 (see supplement for proof) shows that our scheduling attempts to speed up the convergence of Lasso by decreasing the objective as much as possible at every iteration. However, in practice, we approximate $p(j) \propto (\delta\beta_j^{(t)})^2$ with $f_1(j) \propto (\delta\beta_j^{(t-1)})^2 + \eta$, because $\delta\beta_j^{(t)}$ is unavailable at the t-th iteration before computing $\beta_j^{(t)}$; we add η to give all $\beta_j$'s a non-zero probability of being updated, to account for the approximation.
4 STRADS System Architecture and Implementation

Our STRADS system implementation uses multiple master/scheduler machines, multiple worker machines, and a single "master" coordinator machine² that directs the activities of the schedulers and workers. The basic unit of STRADS execution is a "round", which consists of schedule-push-pull, in that order. In more detail (Figure 1): (1) the masters execute schedule to pick U sets of model parameters x that can be safely updated in parallel (if the masters need to read parameters, they get them from the key-value stores); (2) jobs for push, which update the U sets of parameters, are dispatched via the coordinator to the workers (again, workers read parameters from the key-value stores), which then execute push to compute partial updates z for each parameter; (3) the key-value stores execute pull to aggregate the partial updates z and keep the newly updated parameters. To efficiently use multiple cores/machines in the scheduler pool, STRADS uses pipelined schedule computations, i.e., masters compute schedule and queue jobs in advance for future rounds. In other words, the parameters to be updated are determined by the masters without waiting for the workers' parameter updates; the jobs for parameter updates are dispatched to workers in turn by the coordinator. By pipelining schedule, the master machines do not become a bottleneck even with a large number of workers. Specifically, the pipelined strategy does not incur any parallelization error if the parameters x for push can be ordered in a manner that does not depend on their actual values (e.g., the MF and LDA applications).

²The coordinator sends jobs from the masters to the workers, which does not bottleneck at the 10- to 100-machine scale explored in this paper. Distributing the coordinator is left for future work.

For programs whose schedule outcome depends on the current values of x (e.g.,
Lasso), the strategy is equivalent to executing schedule based on stale values of x, similar to how parameter servers allow computations to be executed on stale model parameters [15, 1]. In the Lasso experiments in §5, such a schedule strategy with stale values greatly improved the convergence rate. STRADS does not have to perform push-pull communication between the masters and the workers (which would bottleneck the masters). Instead, the model parameters x can be globally accessible through a distributed, partitioned key-value store (represented by standard arrays in our pseudocode). A variety of key-value store synchronization schemes exist, such as Bulk Synchronous Parallel (BSP), Stale Synchronous Parallel (SSP) [15], and Asynchronous Parallel (AP). In this paper, we use BSP synchronization; we leave the use of alternative schemes like SSP or AP as future work. We implemented STRADS using C++ and the Boost libraries; OpenMPI 1.4.5 was used for asynchronous communication between the master schedulers, workers, and key-value stores.

5 Experiments

We now demonstrate that our STRADS implementations of LDA, MF and Lasso can (1) reach larger model sizes than other baselines; (2) converge at least as fast, if not faster, than other baselines; and (3) use less memory per machine as machines are added (efficient partitioning). As baselines, we used (a) a STRADS implementation of distributed Lasso with only a naive round-robin scheduler (Lasso-RR), (b) GraphLab's Alternating Least Squares (ALS) implementation of MF [19], and (c) YahooLDA for topic modeling [1]. Note that Lasso-RR imitates, on STRADS, the random scheduling scheme proposed by the Shotgun algorithm. We chose GraphLab and YahooLDA as they are popular choices for distributed MF and LDA. We conducted experiments on two clusters [11] (with 2-core and 16-core machines, respectively) to show the effectiveness of STRADS model-parallelism across different hardware.
We used the 2-core cluster for LDA, and the 16-core cluster for Lasso and MF. The 2-core cluster contains 128 machines, each with two 2.6GHz AMD cores and 8GB RAM, connected via a 1Gbps network interface. The 16-core cluster contains 9 machines, each with 16 2.1GHz AMD cores and 64GB RAM, connected via a 40Gbps network interface. Both clusters exhibit a 4GB memory-to-core ratio, a setting commonly observed in the machine learning literature [22, 13], which closely matches the more cost-effective instances on Amazon EC2. All our experiments use a fixed data size, and we vary the number of machines and/or the model size (unless otherwise stated); furthermore, for Lasso we set λ = 0.001, and for MF we set λ = 0.05.

5.1 Datasets

Latent Dirichlet Allocation: We used 3.9M English Wikipedia abstracts, and conducted experiments using both unigram (1-word) tokens (V = 2.5M unique unigrams, 179M tokens) and bigram (2-word) tokens [16] (V = 21.8M unique bigrams, 79M tokens). We note that our bigram vocabulary (21.8M) is an order of magnitude larger than recently published results [1], demonstrating that STRADS scales to very large models. We set the number of topics to K = 5000 and 10000 (also larger than in the recent literature [1]), which yields extremely large word-topic tables: 25B elements (unigram) and 218B elements (bigram).

Matrix Factorization: We used the Netflix dataset [2] for our MF experiments: 100M anonymized ratings from 480,189 users on 17,770 movies. We varied the rank of W, H from K = 20 to 2000, which exceeds the upper limit of previous MF papers [26, 10, 24].

Lasso: We used synthetic data with 50K samples and J = 10M to 100M features, where every feature $x_j$ has only 25 non-zero samples. To simulate correlations between adjacent features (which exist in real-world data sets), we first generate $x_1 \sim \mathrm{Unif}(0, 1)$. Then, with probability 0.9 we draw $x_j \sim \mathrm{Unif}(0, 1)$, and with probability 0.1 we set $x_j = 0.9\,x_{j-1} + 0.1\,\mathrm{Unif}(0, 1)$, for $j = 2, \ldots, J$.
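The synthetic-feature generator described above can be sketched as follows (illustrative Python with small n and J; the 25-non-zero-samples sparsification is omitted for brevity, and `make_correlated_features` is our name for it):

```python
import numpy as np

# Each feature is fresh Unif(0,1) noise with probability 0.9; with
# probability 0.1 it is 0.9*x_{j-1} + 0.1*Unif(0,1), which introduces
# correlation between adjacent features.
def make_correlated_features(n, J, seed=0):
    rng = np.random.default_rng(seed)
    X = np.empty((n, J))
    X[:, 0] = rng.uniform(0, 1, size=n)
    for j in range(1, J):
        if rng.random() < 0.1:
            X[:, j] = 0.9 * X[:, j - 1] + 0.1 * rng.uniform(0, 1, size=n)
        else:
            X[:, j] = rng.uniform(0, 1, size=n)
    return X

X = make_correlated_features(n=200, J=50)
print(X.shape)  # (200, 50)
```

Correlated neighbors are exactly the coordinate pairs that the dependency filter (threshold ρ) must keep out of the same concurrent update batch.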
5.2 Speed and Model Sizes

Figure 7 shows the time taken by each algorithm to reach a fixed objective value (over a range of model sizes), as well as the largest model size that each baseline was capable of running. For LDA and MF, STRADS handles much larger model sizes than either YahooLDA (which could handle 5K topics on the unigram dataset) or GraphLab (which could handle rank < 80), while converging more quickly; we attribute STRADS's faster convergence to lower parallelization error (LDA only) and reduced synchronization requirements through careful model partitioning (LDA, MF).

[Figure 7: Convergence time versus model size for STRADS and baselines for (left) LDA, (center) MF, and (right) Lasso. We omit the bars if a method did not reach 98% of STRADS's convergence point (YahooLDA and GraphLab-MF failed at 2.5M-vocab/10K-topics and rank K ≥ 80, respectively). STRADS not only reaches larger model sizes than YahooLDA, GraphLab, and Lasso-RR, but also converges significantly faster.]

[Figure 8: Convergence trajectories of different methods for (left) LDA, (center) MF, and (right) Lasso.]

We observed that each YahooLDA worker stores a portion of the word-topic table — specifically, those elements referenced by the words in the worker's data partition.
Because our experiments feature very large vocabulary sizes, even a small fraction of the word-topic table can still be too large for a single machine's memory, which caused YahooLDA to fail on the larger experiments. For Lasso, STRADS converges more quickly than Lasso-RR because of our dynamic schedule strategy, which is graphically captured in the convergence trajectory seen in Figure 8 — observe that STRADS's dynamic schedule causes the Lasso objective to plunge quickly toward the optimum at around 250 seconds. We also see that STRADS LDA and MF achieved better objective values than the other baselines, confirming that STRADS model-parallelism is fast without compromising convergence quality.

5.3 Scalability

[Figure 9: STRADS LDA scalability with increasing machines using a fixed model size. (Left) Convergence trajectories; (right) time taken to reach a log-likelihood of −2.6 × 10^9.]

In Figure 9, we show the convergence trajectories and time-to-convergence for STRADS LDA using different numbers of machines at a fixed model size (unigram with 2.5M vocab and 5K topics). The plots confirm that STRADS LDA exhibits faster convergence with more machines, and that the time to convergence almost halves with every doubling of machines (near-linear scaling).
6 Conclusions

In this paper, we presented a programmable framework for dynamic Big Model-parallelism that provides the following benefits: (1) scalability and efficient memory utilization, allowing larger models to be run with additional machines; (2) the ability to invoke dynamic schedules that reduce model parameter dependencies across workers, leading to lower parallelization error and thus faster, correct convergence. An important direction for future research is to reduce the communication costs of using STRADS. We also want to explore the use of STRADS for other popular ML applications, such as support vector machines and logistic regression.

Acknowledgments

This work was done under support from NSF IIS1447676, CNS-1042543 (PRObE [11]), DARPA FA87501220324, and support from Intel via the Intel Science and Technology Center for Cloud Computing (ISTC-CC).

References

[1] A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A. J. Smola. Scalable inference in latent variable models. In WSDM, 2012.
[2] J. Bennett and S. Lanning. The Netflix prize. In Proceedings of KDD Cup and Workshop, 2007.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993–1022, 2003.
[4] J. K. Bradley, A. Kyrola, D. Bickson, and C. Guestrin. Parallel coordinate descent for L1-regularized loss minimization. In ICML, 2011.
[5] W. Dai, A. Kumar, J. Wei, Q. Ho, G. Gibson, and E. P. Xing. High-performance distributed ML at scale through parameter server consistency models. In AAAI, 2014.
[6] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. Ranzato, A. W. Senior, P. A. Tucker, et al. Large scale distributed deep networks. In NIPS, 2012.
[7] J. Dean and S. Ghemawat. MapReduce: simplified data processing on large clusters. Communications of the ACM, 51(1):107–113, 2008.
[8] J. Fan, R. Samworth, and Y. Wu. Ultrahigh dimensional feature selection: beyond the linear model.
The Journal of Machine Learning Research, 10:2013–2038, 2009.
[9] J. Friedman, T. Hastie, H. Hofling, and R. Tibshirani. Pathwise coordinate optimization. Annals of Applied Statistics, 1(2):302–332, 2007.
[10] R. Gemulla, E. Nijkamp, P. J. Haas, and Y. Sismanis. Large-scale matrix factorization with distributed stochastic gradient descent. In SIGKDD, 2011.
[11] G. Gibson, G. Grider, A. Jacobson, and W. Lloyd. PRObE: A thousand-node experimental cluster for computer systems research. USENIX ;login:, 38, 2013.
[12] J. Gonzalez, Y. Low, A. Gretton, and C. Guestrin. Parallel Gibbs sampling: From colored fields to thin junction trees. In AISTATS, 2011.
[13] J. Gonzalez, Y. Low, H. Gu, D. Bickson, and C. Guestrin. PowerGraph: Distributed graph-parallel computation on natural graphs. In OSDI, 2012.
[14] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences of the United States of America, 101(Suppl 1):5228–5235, 2004.
[15] Q. Ho, J. Cipar, H. Cui, J. Kim, S. Lee, P. B. Gibbons, G. Gibson, G. R. Ganger, and E. P. Xing. More effective distributed ML via a stale synchronous parallel parameter server. In NIPS, 2013.
[16] J. H. Lau, T. Baldwin, and D. Newman. On collocations and topic models. ACM Transactions on Speech and Language Processing (TSLP), 10(3):10, 2013.
[17] Q. V. Le, M. A. Ranzato, R. Monga, M. Devin, K. Chen, G. S. Corrado, J. Dean, and A. Y. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012.
[18] M. Li, D. G. Andersen, J. W. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. J. Shekita, and B. Su. Scaling distributed machine learning with the parameter server. In OSDI, 2014.
[19] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M. Hellerstein. Distributed GraphLab: A framework for machine learning and data mining in the cloud. In VLDB, 2012.
[20] D. Newman, A. Asuncion, P. Smyth, and M. Welling. Distributed algorithms for topic models.
The Journal of Machine Learning Research, 10:1801–1828, 2009.
[21] C. Scherrer, A. Tewari, M. Halappanavar, and D. Haglin. Feature clustering for accelerating parallel coordinate descent. In NIPS, 2012.
[22] Y. Wang, X. Zhao, Z. Sun, H. Yan, L. Wang, Z. Jin, L. Wang, Y. Gao, J. Zeng, Q. Yang, et al. Towards topic modeling for big data. arXiv:1405.4402 [cs.IR], 2014.
[23] J. Wei, W. Dai, A. Kumar, X. Zheng, Q. Ho, and E. P. Xing. Consistent bounded-asynchronous parameter servers for distributed ML. arXiv:1312.7869 [stat.ML], 2013.
[24] H. Yu, C. Hsieh, S. Si, and I. Dhillon. Scalable coordinate descent approaches to parallel matrix factorization for recommender systems. In ICDM, 2012.
[25] M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica. Spark: Cluster computing with working sets. In HotCloud, 2010.
[26] Y. Zhou, D. Wilkinson, R. Schreiber, and R. Pan. Large-scale parallel collaborative filtering for the Netflix prize. In AAIM, 2008.
[27] M. Zinkevich, J. Langford, and A. J. Smola. Slow learners are fast. In NIPS, 2009.
[28] M. Zinkevich, M. Weimer, L. Li, and A. J. Smola. Parallelized stochastic gradient descent. In NIPS, 2010.
Shape and Illumination from Shading using the Generic Viewpoint Assumption

Daniel Zoran* (CSAIL, MIT) danielz@mit.edu
Dilip Krishnan* (CSAIL, MIT) dilipkay@mit.edu
Jose Bento (Boston College) jose.bento@bc.edu
William T. Freeman (CSAIL, MIT) billf@mit.edu

Abstract

The Generic Viewpoint Assumption (GVA) states that the position of the viewer or the light in a scene is not special. Thus, any parameters estimated from an observation should be stable under small perturbations such as object, viewpoint, or light positions. The GVA has been analyzed and quantified in previous works, but has not been put to practical use in actual vision tasks. In this paper, we show how to utilize the GVA to estimate shape and illumination from a single shading image, without the use of other priors. We propose a novel linearized Spherical Harmonics (SH) shading model which enables us to obtain a computationally efficient form of the GVA term. Together with a data term, we build a model whose unknowns are shape and SH illumination. The model parameters are estimated using the Alternating Direction Method of Multipliers embedded in a multi-scale estimation framework. In this prior-free framework, we obtain competitive shape and illumination estimation results under a variety of models and lighting conditions, requiring fewer assumptions than competing methods.

1 Introduction

The generic viewpoint assumption (GVA) [5, 9, 21, 22] postulates that what we see in the world is not seen from a special viewpoint or lighting condition. Figure 1 demonstrates this idea with the famous Necker cube example¹. A three-dimensional cube may be observed with two vertices or edges perfectly aligned, giving rise to a two-dimensional interpretation. Another possibility is a view that exposes only one of the faces of the cube, giving rise to a square. However, these 2D views are unstable to slight perturbations in viewing position.
Other examples in [9] and [22] show situations where views are unstable to lighting rotations. While there has been interest in the GVA in the psychophysics community [22, 12], to the best of our knowledge this principle has been largely ignored in the computer vision community. One notable exception is the paper by Freeman [9], which gives a detailed analytical account of how to incorporate the GVA in a Bayesian framework. In that paper, it is shown that using the GVA modifies the probability space of different explanations of a scene, preferring perceptually valid and stable solutions over contrived and unstable ones, even though all of these fully explain the observed image. No algorithm incorporating the GVA, beyond exhaustive search, was proposed.

*Equal contribution
¹Taken from http://www.cogsci.uci.edu/~ddhoff/three-cubes.gif

Figure 1: Illustration of the GVA principle using the Necker cube example. The cube in the middle can be viewed in multiple ways. However, the views on the left and right require a very specific viewing angle. Slight rotations of the viewer around the exact viewing positions would dramatically change the observed image. Thus, these views are unstable to perturbations. The middle view, on the contrary, is stable to viewer rotations.

Shape from shading is a basic low-level vision task. Given an input shading image - an image of a constant-albedo object depicting only changes in illumination - we wish to infer the shape of the objects in the image. In other words, we wish to recover the relative depth Z_i at each pixel i in the image. Given values of Z, local surface orientations are given by the gradients ∇xZ and ∇yZ along the coordinate axes. A key component in estimating the shape is the illumination L. The parameters of L may be given with the image, or may need to be estimated from the image along with the shape.
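The relationship between depth, slopes, and surface orientation can be made concrete with a small numpy sketch (our own illustration, not the paper's code): slopes via forward differences, and the corresponding unit surface normals.

```python
import numpy as np

def slopes(Z):
    """Forward-difference surface slopes alpha = dZ/dx, beta = dZ/dy
    (last row/column replicated so shapes match the depth map)."""
    alpha = np.diff(Z, axis=1, append=Z[:, -1:])
    beta = np.diff(Z, axis=0, append=Z[-1:, :])
    return alpha, beta

def normals_from_depth(Z):
    """Unit surface normals from a depth map: n ∝ (-dZ/dx, -dZ/dy, 1)."""
    a, b = slopes(Z)
    n = np.stack([-a, -b, np.ones_like(Z)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

For a flat surface the normals all point along the z-axis; for a planar ramp Z = x the interior slopes are constant.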
The latter is a much harder problem due to its ambiguous nature, as many different combinations of surface orientations and lights may explain the same image. While the notion of a shading image may seem unnatural, extracting shading images from natural images has been an active field of research. There are effective ways of decomposing images into shading and albedo images (so-called "intrinsic images" [20, 10, 1, 29]), and their output may be used as input to shape from shading algorithms. In this paper we show how to effectively utilize the GVA for shape and illumination estimation from a single shading image. The only terms in our optimization are the data term, which explains the observation, and the GVA term. We propose a novel shading model which is a linearization of the spherical harmonics (SH) shading model [25]. The SH model has been gaining popularity in the vision and graphics communities in recent years [26, 17], as it is more expressive than the popular single-source Lambertian model. Linearizing this model allows us, as we show below, to obtain simple expressions for our image and GVA terms, enabling us to use them effectively in an optimization framework. Given a shading image with an unknown light source, our optimization procedure solves for the depth and illumination in the scene. We optimize using the Alternating Direction Method of Multipliers (ADMM) [4, 6]. We show that this method is competitive with current shape and illumination from shading algorithms, without the use of other priors over illumination or geometry.

2 Related Work

Classical works on shape from shading include [13, 14, 15, 8, 23], and newer works include [3, 2, 19, 30]. It is beyond the scope of this paper to give a full survey of this well-studied field, and we refer the reader to [31] and [28] for good reviews. A large part of the research has focused on estimating the shape under known illumination conditions.
While still a hard problem, it is more constrained than estimating both the illumination and the shape. In impressive recent work, Barron and Malik [3] propose a method for estimating not just the illumination and shape, but also the albedo of a given masked object from a single image. By using a number of novel (and carefully balanced) priors over shape (such as smoothness and contour information), albedo, and illumination, it is shown that reasonable estimates of shape and illumination may be extracted. These priors and the data term are combined in a novel multi-scale framework which weights coarser-scale (lower-frequency) estimates of shape more than finer-scale estimates. Furthermore, Barron and Malik use a spherical harmonics lighting model to allow richer recovery of real-world scenes and diffuse outdoor lighting conditions. Another contribution of their work is the observation that joint inference of multiple parameters may prove to be more robust (although this is hard to prove rigorously). The expansion of the original MIT dataset [11] provided in [3] is also a useful contribution.

Another recent notable example is that of Xiong et al. [30]. In this thorough work, the distribution of possible shape/illumination combinations in a small image patch is derived, assuming a quadratic depth model. It is shown that local patches may be quite informative, and that there are only a few possible light/shape explanations for each patch. A framework for estimating full model geometry under known lighting conditions is also proposed.

3 Using the Generic View Assumption for Shape from Shading

In [9], Freeman gave an analytical framework for using the GVA. However, the computational examples in that paper were restricted to linear shape from shading models. No inference algorithm was presented; instead the emphasis was on analyzing how the GVA term modifies the posterior distribution of candidate shape and illumination estimates.
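The genericity factor at the heart of Freeman's framework (derived in the next subsection) can be sketched numerically. This is our own toy implementation, with finite-difference derivatives standing in for the analytical ones: it builds the Fisher-style matrix A and the 1/sqrt(det A) factor for an arbitrary render function f(Q, w).

```python
import numpy as np

def fisher_matrix(f, Q, w0, eps=1e-5):
    """A[i, j] = (df/dw_i)^T (df/dw_j), derivatives at w0 by central
    finite differences. f(Q, w) must return a 1-D rendered image."""
    w0 = np.asarray(w0, dtype=float)
    grads = []
    for i in range(w0.size):
        dw = np.zeros_like(w0)
        dw[i] = eps
        grads.append((f(Q, w0 + dw) - f(Q, w0 - dw)) / (2 * eps))
    G = np.stack(grads)                 # shape (n_generic, n_pixels)
    return G @ G.T

def genericity(f, Q, w0, eps=1e-5):
    """1 / sqrt(det A): small when the image changes rapidly with the
    generic variables w, making the hypothesis Q less probable."""
    return 1.0 / np.sqrt(np.linalg.det(fisher_matrix(f, Q, w0, eps)))
```

As a sanity check, for a toy render f(Q, w) = Q * w the derivative image is Q itself, so A reduces to the squared norm of Q.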
The key idea in [9] is to marginalize the posterior distribution over a set of "nuisance" parameters, which correspond to object or illumination perturbations. This integration step corresponds to finding a solution that is stable to these perturbations.

3.1 A Short Introduction to the GVA

Here we give a short summary of the derivations in [9], which we use in our model. We start with a generative model f for images, which depends on scene parameters Q and a set of generic parameters w. The generative model we use is explained in Section 4. w are the parameters which will eventually be marginalized. In our shape and illumination from shading case, f corresponds to our shading model in Eq. 14 (defined below). Q includes both the surface depth at each point Z and the light coefficient vector L. Finally, the generic variable w corresponds to different object rotation angles around different axes of rotation (though there could be other generic variables, we only use this one). Assuming measurement noise η, the result of the generative process is:

I = f(Q, w) + η    (1)

Now, given an image I, we wish to infer scene parameters Q by marginalizing out the generic variables w. Using Bayes' theorem, this results in the following probability function:

P(Q|I) = (P(Q) / P(I)) ∫_w P(w) P(I|Q, w) dw    (2)

Assuming a low Gaussian noise model for η, the above integral can be approximated with a Laplace approximation, which involves expanding f in a Taylor series around w0. We get the following expression, aptly named in [9] the "scene probability equation":

P(Q|I) = C · exp(−‖I − f(Q, w0)‖² / (2σ²)) · P(Q) P(w0) · (1 / √(det A))    (3)

where C is a constant, the exponential is the fidelity term, P(Q)P(w0) is the prior, 1/√(det A) is the genericity term, and A is a matrix whose (i, j)-th entry is:

A_{i,j} = (∂f(Q, w)/∂w_i)^T (∂f(Q, w)/∂w_j)    (4)

where the derivatives are evaluated at w0. A is often called the Fisher information matrix. Eq.
3 has three terms. The fidelity term (sometimes called the likelihood term, data term, or image term) tells us how close we are to the observed image. The prior tells us how likely our current parameter estimates are. The last term, genericity, tells us how much our observed image would change under perturbations of the different generic variables; it penalizes results that are unstable w.r.t. the generic variables. From the form of A, it is clear why the genericity term helps: the determinant of A is large when the rendered image f changes rapidly with respect to w. This makes the genericity term small and the corresponding hypothesis Q less probable.

3.2 Using the GVA for Shape and Illumination Estimation

We now show how to derive the GVA term for general object rotations by using the result in [9] and applying it to our linearized shading model. Due to lack of space, we provide the main results here; please refer to the supplementary material for full details. Given an axis of rotation parametrized by angles θ and γ, the derivative of f w.r.t. a rotation φ about the axis is:

∂f/∂φ = a·Rx + b·Ry + c·Rz    (5)

a = cos(θ) sin(γ), b = sin(θ) sin(γ), c = cos(γ)    (6)

where Rx, Ry, and Rz are three derivative images for rotations around the canonical axes, whose i-th pixels are:

Rx_i = Ix_i Z_i + α_i β_i kx_i + (1 + β_i²) ky_i    (7)
Ry_i = −Iy_i Z_i − α_i β_i ky_i − (1 + α_i²) kx_i    (8)
Rz_i = Ix_i Y_i − Iy_i X_i + α_i ky_i − β_i kx_i    (9)

We use these images to derive the GVA term for rotations around different axes, resulting in:

GVA(Z, L) = Σ_{θ∈Θ} Σ_{γ∈Γ} 1 / √(2πσ² ‖∂f/∂φ‖²)    (10)

where Θ and Γ are discrete sets of angles in [0, π) and [0, 2π) respectively. Looking at the term in Eqs. 5–10, we see that had we used the full, non-linearized shading model of Eq. 11, it would result in a very complex expression, especially considering that α = ∇xZ and β = ∇yZ are functions of the depth Z.
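Given precomputed derivative images Rx, Ry, Rz, Eq. 10 is a direct sum over a grid of axis angles. A minimal numpy sketch of that sum (our own illustration; the derivative images themselves are assumed given, per Eqs. 7–9):

```python
import numpy as np

def gva_term(Rx, Ry, Rz, thetas, gammas, sigma=1.0):
    """Eq. 10: sum over a grid of rotation axes of
    1 / sqrt(2*pi*sigma^2 * ||df/dphi||^2), where the derivative image
    is df/dphi = a*Rx + b*Ry + c*Rz with (a, b, c) from Eq. 6."""
    total = 0.0
    for th in thetas:
        for ga in gammas:
            a = np.cos(th) * np.sin(ga)
            b = np.sin(th) * np.sin(ga)
            c = np.cos(ga)
            d = a * Rx + b * Ry + c * Rz        # derivative image
            total += 1.0 / np.sqrt(2 * np.pi * sigma**2 * np.sum(d**2))
    return total
```

Axes along which the rendered image changes strongly (large ‖∂f/∂φ‖) contribute little, so unstable configurations receive a small GVA value.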
Even after linearization this expression may seem a bit daunting, but we show in Section 5 how we can significantly simplify the optimization of this function.

4 Linearized Spherical Harmonics Shading Model

The Spherical Harmonics (SH) lighting² model allows for a rich yet concise description of a lighting environment [25]. By keeping just a few of the leading SH coefficients when describing the illumination, it allows an accurate description of low-frequency changes of lighting as a function of direction, without needing to explicitly model the entire lighting environment. This model has been used successfully in the graphics and vision communities. The popular setting for SH lighting is to keep the first three orders of the SH functions, resulting in nine coefficients which we denote by the vector L. Let Z be a depth map, with the depth at pixel i given by Z_i. The surface slopes at pixel i are defined as α_i = (∇xZ)_i and β_i = (∇yZ)_i respectively. Given L and Z, the log shading at pixel i for a diffuse, Lambertian surface under the SH model is given by:

log S_i = n_i^T M n_i    (11)

where n_i is the homogeneous unit normal:

n_i = [ α_i/s_i   β_i/s_i   1/s_i   1 ]^T, with s_i = √(1 + α_i² + β_i²)    (12)

and:

M = [ c1·L9    c1·L5    c1·L8    c2·L4
      c1·L5   −c1·L9    c1·L6    c2·L2
      c1·L8    c1·L6    c3·L7    c2·L3
      c2·L4    c2·L2    c2·L3    c4·L1 − c5·L7 ]    (13)

c1 = 0.429043, c2 = 0.511664, c3 = 0.743125, c4 = 0.886227, c5 = 0.247708

The formation model in Eq. 11 is non-linear and non-convex in the surface slopes α and β. In practice, this leads to optimization difficulties such as local minima, which were noted by Barron and Malik in [3]. In order to overcome this, we linearize Eq. 11 around the local surface slope estimates α0_i and β0_i, such that:

log S_i ≈ kc(α0_i, β0_i, L) + kx(α0_i, β0_i, L) α_i + ky(α0_i, β0_i, L) β_i    (14)

²We use the terms lighting and shading interchangeably.

where the local surface slopes are estimated in a local patch around each pixel of our current estimated surface.
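Eqs. 11–13 can be evaluated directly. The following sketch (our own illustration; the slope maps alpha and beta are assumed given) builds M from the nine coefficients and computes the log shading per pixel:

```python
import numpy as np

# The five constants of Eq. 13.
C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def sh_matrix(L):
    """M of Eq. 13 from the nine SH light coefficients
    (the paper's L1..L9 map to L[0]..L[8] here)."""
    L1, L2, L3, L4, L5, L6, L7, L8, L9 = L
    return np.array([
        [C1 * L9,  C1 * L5, C1 * L8, C2 * L4],
        [C1 * L5, -C1 * L9, C1 * L6, C2 * L2],
        [C1 * L8,  C1 * L6, C3 * L7, C2 * L3],
        [C2 * L4,  C2 * L2, C2 * L3, C4 * L1 - C5 * L7]])

def log_shading(alpha, beta, L):
    """Per-pixel log shading of Eq. 11: n^T M n, with the homogeneous
    normal of Eq. 12 built from slope maps alpha and beta."""
    M = sh_matrix(L)
    s = np.sqrt(1.0 + alpha**2 + beta**2)
    n = np.stack([alpha / s, beta / s, 1.0 / s, np.ones_like(alpha)],
                 axis=-1)                       # shape (..., 4)
    return np.einsum('...i,ij,...j->...', n, M, n)
```

For a flat surface (zero slopes) and a light with only the DC coefficient L1 = 1, the quadratic form reduces to c4·L1.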
The derivation of the linearization is given in the supplementary material. For the sake of brevity, we omit the dependence on α0_i, β0_i, and L, and denote the coefficients at each location as kc_i, kx_i, and ky_i for the remainder of the paper. A natural question concerns the accuracy of the linearized model in Eq. 14. The linearization is accurate in most situations where the depth Z changes gradually, such that the change in slope is linear or small in magnitude. In [30], locally quadratic shapes are assumed; this leads to linear changes in slopes, and in such situations the linearization is highly accurate. We tested the accuracy of the linearization by computing the difference between the estimates in Eq. 14 and Eq. 11 over ground-truth shape and illumination estimates, and found it to be highly accurate for the models in our experiments. The linearization in Eq. 14 leads to a quadratic formation model for the image term (described in Section 5.2.1), resulting in more efficient updates for α and β. Furthermore, it allows us to effectively incorporate the GVA even within the spherical harmonics framework.

5 Optimization using the Alternating Direction Method of Multipliers

5.1 The Cost Function

Following Eq. 3, we can now derive the cost function we will optimize w.r.t. the scene parameters Z and L. To derive a MAP estimate, we take the negative log of Eq. 3 and use constant priors over both the scene parameters and the generic variables; thus we have a prior-free cost function. This results in the following cost:

g(Z, L) = λimg ‖I − log S(Z, L)‖² − λGVA log GVA(Z, L)    (15)

where f(Z, L) = log S(Z, L) is our linearized shading model (Eq. 14) and the GVA term is defined in Eq. 10. λimg and λGVA are hyper-parameters which we set to 2 and 1 respectively for all experiments. Because of the dependence of α and β on Z, directly optimizing this cost function is hard, as it results in a large, non-linear differential system for Z.
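The paper derives kc, kx, ky in closed form (see its supplementary material). As an illustration only, the same coefficients can be obtained for any per-pixel shading function by finite differences around the slope estimates (α0, β0):

```python
import numpy as np

def linearize_shading(log_s, a0, b0, eps=1e-6):
    """First-order Taylor coefficients around slope estimates (a0, b0)
    so that log S ≈ kc + kx*alpha + ky*beta (Eq. 14). log_s(alpha, beta)
    is any per-pixel shading function; the derivatives are taken
    numerically here, standing in for the closed-form derivation."""
    kx = (log_s(a0 + eps, b0) - log_s(a0 - eps, b0)) / (2 * eps)
    ky = (log_s(a0, b0 + eps) - log_s(a0, b0 - eps)) / (2 * eps)
    kc = log_s(a0, b0) - kx * a0 - ky * b0   # intercept of the tangent
    return kc, kx, ky
```

Because a0, b0, and the outputs of log_s may all be whole slope maps, this works per pixel in vectorized form as well as on scalars.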
In order to make this more tractable, we introduce the surface spatial derivatives ˜α and ˜β as auxiliary variables, and solve for the following cost function, which constrains the resulting surface to be integrable:

˜g(Z, ˜α, ˜β, L | I) = λimg ‖I − log S(˜α, ˜β, L)‖² − λGVA log GVA(Z, ˜α, ˜β, L)    (16)
s.t. ˜α = ∇xZ, ˜β = ∇yZ, ∇y∇xZ = ∇x∇yZ

ADMM allows us to subdivide the cost into relatively simple subproblems, solve each one independently, and then aggregate the results. We briefly review the message-passing variant of ADMM [7] in the supplementary material.

5.2 Subproblems

5.2.1 Image Term

This subproblem ties our solution to the input log shading image. The participating variables are the slopes ˜α and ˜β and the illumination L. We minimize the following cost:

argmin_{˜α, ˜β, L} λimg Σ_i (I_i − kc_i − kx_i ˜α_i − ky_i ˜β_i)² + (ρ/2)‖˜α − n_˜α‖² + (ρ/2)‖˜β − n_˜β‖² + (ρ/2)‖L − n_L‖²    (17)

where n_˜α, n_˜β, and n_L are the incoming messages for the corresponding variables, as described above. We solve this subproblem iteratively: for ˜α and ˜β we keep L constant (and as a result the k's are constant). A closed-form solution exists since this is just a quadratic, thanks to our relinearization model. In order to solve for L we take a few (5 to 10) steps of L-BFGS [27].

5.2.2 GVA Term

The participating variables here are the depth values Z, the slopes ˜α and ˜β, and the light L. We look for the parameters which minimize:

argmin_{Z, ˜α, ˜β, L} −(λGVA/2) log GVA(Z, ˜α, ˜β, L) + (ρ/2)‖˜α − n_˜α‖² + (ρ/2)‖˜β − n_˜β‖² + (ρ/2)‖L − n_L‖²    (18)

Here, although the expression for the GVA term (Eq. 10) is greatly simplified by the shading model linearization, we have to resort to numerical optimization. We solve for the parameters using a few steps of L-BFGS [27].

5.2.3 Depth Integrability Constraint

Shading depends only on local slope (regardless of the choice of shading model, as long as there are no shadows in the scene), hence the image term only gives us information about surface slopes.
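For the image-term subproblem above (Eq. 17) with L held fixed, setting the gradient with respect to the slopes to zero gives an independent 2x2 linear system per pixel. A vectorized sketch (our own; the paper's exact update may differ in details):

```python
import numpy as np

def image_term_slopes(I, kc, kx, ky, n_a, n_b, lam=2.0, rho=1.0):
    """Per-pixel minimizer of the slope part of Eq. 17 with L fixed:
        lam*(I - kc - kx*a - ky*b)^2 + rho/2*((a - n_a)^2 + (b - n_b)^2)
    Solved in closed form via Cramer's rule on the 2x2 normal equations;
    all arguments may be scalars or same-shaped arrays."""
    A11 = 2.0 * lam * kx**2 + rho
    A12 = 2.0 * lam * kx * ky
    A22 = 2.0 * lam * ky**2 + rho
    r1 = 2.0 * lam * kx * (I - kc) + rho * n_a
    r2 = 2.0 * lam * ky * (I - kc) + rho * n_b
    det = A11 * A22 - A12**2          # > 0 whenever rho > 0
    return (A22 * r1 - A12 * r2) / det, (A11 * r2 - A12 * r1) / det
```

Because rho > 0 keeps the system positive definite, the update is well defined even where kx and ky vanish.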
Using this information we need to find an integrable surface Z [8]. Finding integrable surfaces from local slope measurements has been a long-standing research question, and there are several ways of doing this [8, 14, 18]. By finding such a surface we satisfy both constraints in Eq. 16 automatically. Enforcing integrability through message passing was performed in [24], where it was shown to be helpful in recovering smooth surfaces; in that work, belief-propagation-based message passing was used. The cost for this subproblem is:

argmin_{Z, ˜α, ˜β} (ρ/2)‖Z − n_Z‖² + (ρ/2)‖˜α − n_˜α‖² + (ρ/2)‖˜β − n_˜β‖²    (19)
s.t. ˜α = ∇xZ, ˜β = ∇yZ, ∇y∇xZ = ∇x∇yZ

We solve for the surface Z given the slope messages n_˜α and n_˜β by solving a least-squares system to get the integrable surface. Then, the solution for ˜α and ˜β is just the spatial derivative of the resulting surface, satisfying all the constraints and minimizing the cost simultaneously.

5.3 Relinearization

After each ADMM iteration, we re-linearize the kc, kx, and ky coefficients. We take the current estimates for Z and L and use them as input to our linearization procedure (see the supplementary material for details). These coefficients are then used for the next ADMM iteration, and this process is repeated.

6 Experiments and Results

Figure 2: Summary of results, comparing N-MAE and L-MSE for SIFS, ours with GVA, and ours without GVA on (a) models from [30] using "lab" lights, (b) MIT models using "natural" lights, and (c) the average over all models and lights. Our performance is quite similar to that of SIFS [3], although we do not use contour normals, nor any shape or illumination priors, unlike [3]. We outperform SIFS on models from [30], while SIFS performs well on the MIT models. On average, we are comparable to SIFS in N-MAE and slightly better at light estimation.
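The least-squares integration step of Section 5.2.3 can be sketched with dense operators on a tiny grid (illustrative only; a real implementation would use a sparse solver). Stacking an identity block with forward-difference gradient operators matches the depth message and the slope messages simultaneously:

```python
import numpy as np

def grad_ops(h, w):
    """Forward-difference gradient operators Dx, Dy as dense matrices
    acting on a flattened h-by-w depth map (dense only for brevity)."""
    n = h * w
    Dx = np.zeros((n, n)); Dy = np.zeros((n, n))
    idx = lambda r, c: r * w + c
    for r in range(h):
        for c in range(w):
            if c + 1 < w:
                Dx[idx(r, c), idx(r, c)] = -1.0
                Dx[idx(r, c), idx(r, c + 1)] = 1.0
            if r + 1 < h:
                Dy[idx(r, c), idx(r, c)] = -1.0
                Dy[idx(r, c), idx(r + 1, c)] = 1.0
    return Dx, Dy

def integrate_surface(n_z, n_a, n_b):
    """Least-squares Z matching the depth message n_z and the slope
    messages (n_a, n_b); the returned slopes are derivatives of Z, so
    the result is integrable by construction."""
    h, w = n_z.shape
    Dx, Dy = grad_ops(h, w)
    A = np.vstack([np.eye(h * w), Dx, Dy])
    rhs = np.concatenate([n_z.ravel(), n_a.ravel(), n_b.ravel()])
    Z = np.linalg.lstsq(A, rhs, rcond=None)[0].reshape(h, w)
    return Z, (Dx @ Z.ravel()).reshape(h, w), (Dy @ Z.ravel()).reshape(h, w)
```

When the messages are mutually consistent (slopes equal to the true gradients of the depth message), the least-squares solution recovers that surface exactly.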
We use the GVA algorithm to estimate shape and illumination from synthetic grayscale shading images, rendered using 18 different models from the MIT/Berkeley intrinsic images dataset [3] and 7 models from the Harvard dataset of [30]. Each model is rendered using several different light sources: the MIT models are lit with the "natural" light dataset that comes with each model, and we use 2 lights from the "lab" dataset to light the models from [30], resulting in 32 different images. We use the provided mask only in the image term, where we solve only for pixels within the mask. We do not use any other contour information as in [3]. Models were downscaled to a quarter of their original size. Running times for our algorithm are roughly 7 minutes per image with the GVA term and about 1 minute without the GVA term, using unoptimized MATLAB code. We compare to the SIFS algorithm of [3], which is the subset of their algorithm that does not estimate albedo; we use their publicly released code. We initialize with an all-zeros depth (corresponding to a flat surface), and the light is initialized to the mean light from the "natural" dataset in [3]. We perform the estimation at multiple scales using V-sweeps - solving at a coarse scale, upscaling, solving at a finer scale, then downsampling the result, repeating the process 3 times. The same parameter settings were used in all cases³. We use the same error measures as in [3].

Figure 3: Example of our results (Ground Truth, Ours - GVA, SIFS, and Ours - No GVA, each shown from two viewpoints, with the estimated light and the rendered image). Note that the vertical scale of the mesh plots differs between plots and has been rescaled for display (specifically, the SIFS result is 4 times deeper). Our method preserves features such as the legs and belly, while SIFS smooths them out. The GVA light estimate is also quite reasonable. Unlike SIFS, no contour normals nor tuned shape or lighting priors are needed for GVA.
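The multi-scale V-sweep schedule can be sketched as a control loop around an abstract per-scale solver. This is our own simplification with one coarse and one fine scale; `solve` is a stand-in for the per-scale ADMM solver, and the resampling operators are deliberately crude:

```python
import numpy as np

def downsample(img):
    """2x2 block average (a simple stand-in for the paper's rescaling)."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour upsampling to a target shape."""
    h, w = shape
    ri = (np.arange(h) * img.shape[0]) // h
    ci = (np.arange(w) * img.shape[1]) // w
    return img[np.ix_(ri, ci)]

def v_sweeps(I, solve, n_sweeps=3):
    """Coarse-to-fine V-sweeps: solve at the coarse scale, upscale the
    estimate, solve at the fine scale, downsample the result, repeat.
    solve(image, z_init) returns a depth map at the image's resolution."""
    I_coarse = downsample(I)
    Z_coarse = np.zeros_like(I_coarse)
    for _ in range(n_sweeps):
        Z_coarse = solve(I_coarse, Z_coarse)
        Z_fine = solve(I, upsample(Z_coarse, I.shape))
        Z_coarse = downsample(Z_fine)
    return Z_fine
```

Seeding each finer-scale solve with the upsampled coarse estimate lets low-frequency structure be fixed cheaply before fine detail is optimized.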
The error for the normals is measured using the median angular error (MAE) in radians. For the light, we take the resulting light coefficients and render a sphere lit by this light. We look for a DC shift which minimizes the distance between this image and the rendered ground-truth light, and shift the two images accordingly. The final error for the light is then the L2 distance between the two images, normalized by the number of pixels. The error measure for depth Z used in [3] is quite sensitive to the absolute scaling of the results, so we have decided to omit it from the main paper (even though our performance under this measure is much better than that of [3]). A summary of the results can be seen in Figure 2. The GVA term helps significantly in estimation; this is especially true for light estimation. On average, our performance is similar to that of [3]: our light estimation results are somewhat better, while our geometry estimation results are slightly poorer. It seems that [3] is somewhat overfit to the models in the MIT dataset; when tested on the models from [30], it gets poorer results.

³We will make our code publicly available at http://dilipkay.wordpress.com/sfs/

Figure 4: Another example (Ground Truth, Ours - GVA, SIFS, and Ours - No GVA from two viewpoints, with the estimated light and the rendered image). Note how we manage to recover some of the dominant structure, such as the neck and feet, while SIFS mostly smooths features out (albeit resulting in a more pleasing surface).

Figure 3 shows an example of the results we get, compared to those of SIFS [3], our algorithm with no GVA term, and the ground truth. As can be seen, the light we estimate is quite close to the ground truth. The geometry we estimate certainly captures the main structures of the ground truth. Even though we use no smoothness prior, the resulting mesh is acceptable - though a smoothness prior, such as the one used in [3], would help significantly. The result of [3] misses a lot of the large
scale structures, such as the hippo's belly and feet, but it is certainly smooth and aesthetic. Without the GVA term, the resulting light is highly directed and the recovered shape has snake-like structures which precisely line up with the direction of the light. These are very specific local minima which satisfy the observed image well, in agreement with the results in [9]. Figure 4 shows more results on a different model, where the general story is similar.

7 Discussion

In this paper, we have presented a shape and illumination from shading algorithm which makes use of the Generic View Assumption, and we have shown how to utilize the GVA within an optimization framework. We achieve competitive results on shape and illumination estimation without the use of shape or illumination priors. The central message of our work is that the GVA can be a powerful regularizing term for the shape from shading problem. While priors for scene parameters can be very useful, balancing the effect of different priors can be hard, and inferred results may be biased towards a wrong solution. One may ask: is the GVA just another prior? The GVA is a prior assumption, but a very reasonable one: it merely states that all viewpoints and lighting directions are equally likely. Nevertheless, there may exist multiple stable solutions, and priors may be necessary to choose between them [16]. A classical example of this is the convex/concave ambiguity in shape and light. Future directions for this work include applying the GVA to more vision tasks, utilizing better optimization techniques, and investigating the coexistence of priors and GVA terms.

Acknowledgments

This work was supported by NSF CISE/IIS award 1212928 and by the Qatar Computing Research Institute. We would like to thank Jonathan Yedidia for fruitful discussions.

References

[1] J. T. Barron and J. Malik. Color constancy, intrinsic images, and shape estimation.
In Computer Vision – ECCV 2012, pages 57–70. Springer, 2012.
[2] J. T. Barron and J. Malik. Shape, albedo, and illumination from a single image of an unknown object. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 334–341. IEEE, 2012.
[3] J. T. Barron and J. Malik. Shape, illumination, and reflectance from shading. Technical Report UCB/EECS-2013-117, EECS, UC Berkeley, May 2013.
[4] J. Bento, N. Derbinsky, J. Alonso-Mora, and J. S. Yedidia. A message-passing algorithm for multi-agent trajectory planning. In Advances in Neural Information Processing Systems, pages 521–529, 2013.
[5] T. O. Binford. Inferring surfaces from images. Artificial Intelligence, 17(1):205–244, 1981.
[6] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[7] N. Derbinsky, J. Bento, V. Elser, and J. S. Yedidia. An improved three-weight message-passing algorithm. arXiv preprint arXiv:1305.1961, 2013.
[8] R. T. Frankot and R. Chellappa. A method for enforcing integrability in shape from shading algorithms. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 10(4):439–451, 1988.
[9] W. T. Freeman. Exploiting the generic viewpoint assumption. International Journal of Computer Vision, 20(3):243–261, 1996.
[10] P. V. Gehler, C. Rother, M. Kiefel, L. Zhang, and B. Schölkopf. Recovering intrinsic images with a global sparsity prior on reflectance. In NIPS, volume 2, page 4, 2011.
[11] R. Grosse, M. K. Johnson, E. H. Adelson, and W. T. Freeman. Ground truth dataset and baseline evaluations for intrinsic image algorithms. In Computer Vision, 2009 IEEE 12th International Conference on, pages 2335–2342. IEEE, 2009.
[12] D. D. Hoffman. Genericity in spatial vision. Geometric Representations of Perceptual Phenomena: Papers in Honor of Tarow Indow on His 70th Birthday, page 95, 2013.
[13] B. K. Horn. Obtaining shape from shading information. MIT Press, 1989. [14] B. K. Horn and M. J. Brooks. The variational approach to shape from shading. Computer Vision, Graphics, and Image Processing, 33(2):174–208, 1986. [15] K. Ikeuchi and B. K. Horn. Numerical shape from shading and occluding boundaries. Artificial Intelligence, 17(1):141–184, 1981. [16] A. D. Jepson. Comparing stories. Perception as Bayesian Inference, pages 478–488, 1995. [17] J. Kautz, P.-P. Sloan, and J. Snyder. Fast, arbitrary BRDF shading for low-frequency lighting using spherical harmonics. In Proceedings of the 13th Eurographics Workshop on Rendering, pages 291–296. Eurographics Association, 2002. [18] P. Kovesi. Shapelets correlated with surface normals produce surfaces. In Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, volume 2, pages 994–1001. IEEE, 2005. [19] B. Kunsberg and S. W. Zucker. The differential geometry of shape from shading: Biology reveals curvature structure. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on, pages 39–46. IEEE, 2012. [20] Y. Li and M. S. Brown. Single image layer separation using relative smoothness. CVPR, 2014. [21] J. Malik. Interpreting line drawings of curved objects. International Journal of Computer Vision, 1(1):73–103, 1987. [22] K. Nakayama and S. Shimojo. Experiencing and perceiving visual surfaces. Science, 257(5075):1357–1363, 1992. [23] A. P. Pentland. Linear shape from shading. International Journal of Computer Vision, 4(2):153–162, 1990. [24] N. Petrovic, I. Cohen, B. J. Frey, R. Koetter, and T. S. Huang. Enforcing integrability for surface reconstruction algorithms using belief propagation in graphical models. In Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, volume 1, pages I–743. IEEE, 2001. [25] R. Ramamoorthi and P. Hanrahan.
An efficient representation for irradiance environment maps. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 497–500. ACM, 2001. [26] R. Ramamoorthi and P. Hanrahan. A signal-processing framework for inverse rendering. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 117–128. ACM, 2001. [27] M. Schmidt. minFunc, 2005. [28] R. Szeliski. Computer Vision: Algorithms and Applications. Springer, 2010. [29] Y. Weiss. Deriving intrinsic images from image sequences. In Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, volume 2, pages 68–75. IEEE, 2001. [30] Y. Xiong, A. Chakrabarti, R. Basri, S. J. Gortler, D. W. Jacobs, and T. Zickler. From shading to local shape. http://arxiv.org/abs/1310.2916, 2014. [31] R. Zhang, P.-S. Tsai, J. E. Cryer, and M. Shah. Shape-from-shading: a survey. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 21(8):690–706, 1999.
|
2014
|
192
|
5,283
|
Asynchronous Anytime Sequential Monte Carlo Brooks Paige Frank Wood Department of Engineering Science University of Oxford Oxford, UK {brooks,fwood}@robots.ox.ac.uk Arnaud Doucet Yee Whye Teh Department of Statistics University of Oxford Oxford, UK {doucet,y.w.teh}@stats.ox.ac.uk Abstract We introduce a new sequential Monte Carlo algorithm we call the particle cascade. The particle cascade is an asynchronous, anytime alternative to traditional sequential Monte Carlo algorithms that is amenable to parallel and distributed implementations. It uses no barrier synchronizations, which leads to improved particle throughput and memory efficiency. It is an anytime algorithm in the sense that it can be run forever to emit an unbounded number of particles while keeping within a fixed memory budget. We prove that the particle cascade provides an unbiased marginal likelihood estimator which can be straightforwardly plugged into existing pseudo-marginal methods. 1 Introduction Sequential Monte Carlo (SMC) inference techniques require blocking barrier synchronizations at resampling steps, which limit parallel throughput and are costly in terms of memory. We introduce a new asynchronous anytime sequential Monte Carlo algorithm that has statistical efficiency competitive with standard SMC algorithms and sufficiently higher particle throughput that it is, on balance, more efficient per unit computation time. Our approach uses locally-computed decision rules for each particle that do not require block synchronization of all particles, requiring instead only the sharing of summary statistics with particles that follow. In our algorithm each resampling point acts as a queue rather than a barrier: each particle chooses the number of its own offspring by comparing its own weight to the weights of particles which previously reached the queue, blocking only to update summary statistics before proceeding.
An anytime algorithm is an algorithm that can be run continuously, generating progressively better solutions when afforded additional computation time. Traditional particle-based inference algorithms are not anytime in nature; all particles need to be propagated in lock-step to completion in order to compute expectations. Once a particle set runs to termination, inference cannot straightforwardly be continued by simply doing more computation. The naïve strategy of running SMC again and merging the resulting sets of particles is suboptimal due to bias (see [12] for explanation). Particle Markov chain Monte Carlo methods (i.e. particle Metropolis-Hastings and iterated conditional sequential Monte Carlo (iCSMC) [1]) for correctly merging particle sets produced by additional SMC runs are closer to anytime in nature, but they suffer from burstiness, as large sets of particles are computed and then emitted at once; fundamentally, the inner SMC loop of such algorithms still suffers the kind of excessive synchronization performance penalty that the particle cascade directly avoids. Our asynchronous SMC algorithm, the particle cascade, is anytime in nature. The particle cascade can be run indefinitely, without resorting to merging of particle sets. 1.1 Related work Our algorithm shares a superficial similarity to Bernoulli branching numbers [5] and other search and exploration methods used for particle filtering, where each particle samples some number of children to propagate to the next observation. Like the particle cascade, the total number of particles which exist at each generation is allowed to gradually increase and decrease. However, computing branching correction numbers is generally a synchronous operation, requiring all particle weights to be known in order to choose an appropriate number of offspring; nor are these methods anytime.
Sequentially interacting Markov chain Monte Carlo [2] is an anytime algorithm, which although conceptually similar to SMC has different synchronization properties. Parallelizing the resampling step of sequential Monte Carlo methods has drawn increasing recent interest as the effort progresses to scale up algorithms to take advantage of high-performance computing systems and GPUs. Removing the global collective resampling operation [9] is a particular focus for improving performance. Running arbitrarily many particles within a fixed memory budget can also be addressed by tracking random number seeds used to generate proposals, allowing particular particles to be deterministically "replayed" [7]. However, this approach is neither asynchronous nor anytime. 2 Background We begin by briefly reviewing sequential Monte Carlo as generally formulated on state-space models. Suppose we have a non-Markovian dynamical system with latent random variables $X_0, \dots, X_N$ and observed random variables $Y_0, \dots, Y_N$ described by the joint density $$p(x_n \mid x_{0:n-1}, y_{0:n-1}) = f(x_n \mid x_{0:n-1}), \qquad p(y_n \mid x_{0:n}, y_{0:n-1}) = g(y_n \mid x_{0:n}), \quad (1)$$ where $X_0$ is drawn from some initial distribution $\mu(\cdot)$, and $f$ and $g$ are conditional densities. Given observed values $Y_{0:N} = y_{0:N}$, the posterior distribution $p(x_{0:n} \mid y_{0:n})$ is approximated by a weighted set of $K$ particles, with each particle $k$ denoted $X_{0:n}^k$ for $k = 1, \dots, K$. Particles are propagated forward from proposal densities $q(x_n \mid x_{0:n-1})$ and re-weighted at each $n = 1, \dots, N$: $$X_n^k \mid X_{0:n-1}^k \sim q(x_n \mid X_{0:n-1}^k), \quad (2)$$ $$w_n^k = \frac{g(y_n \mid X_{0:n}^k)\, f(X_n^k \mid X_{0:n-1}^k)}{q(X_n^k \mid X_{0:n-1}^k)}, \quad (3)$$ $$W_n^k = W_{n-1}^k w_n^k, \quad (4)$$ where $w_n^k$ is the weight associated with observation $y_n$ and $W_n^k$ is the unnormalized weight of particle $k$ after observation $n$. It is assumed that exact evaluation of $p(x_{0:N} \mid y_{0:N})$ is intractable and that the likelihoods $g(y_n \mid X_{0:n}^k)$ can be evaluated pointwise.
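The propagation and weighting recursion of Eqs. (2)–(4) with the bootstrap proposal $q \equiv f$ can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the model callbacks `mu_sample`, `f_sample`, and `g_density` are hypothetical names supplied by the user.

```python
import numpy as np

def sis(y, mu_sample, f_sample, g_density, K=100, seed=0):
    """Sequential importance sampling: propagate each particle from the
    proposal q = f (Eq. 2) and accumulate W_n^k = W_{n-1}^k * w_n^k
    (Eqs. 3-4); with the bootstrap proposal, w_n^k = g(y_n | x_{0:n})."""
    rng = np.random.default_rng(seed)
    X = [[mu_sample(rng)] for _ in range(K)]                 # K trajectories x_{0:n}
    W = np.array([g_density(y[0], X[k]) for k in range(K)])  # unnormalized weights
    for n in range(1, len(y)):
        for k in range(K):
            X[k].append(f_sample(X[k], rng))                 # x_n ~ f(. | x_{0:n-1})
            W[k] *= g_density(y[n], X[k])                    # W_n^k = W_{n-1}^k * w_n^k
    return X, W
```

With $q \equiv f$ the transition density never needs to be evaluated, only simulated from, which matches the black-box simulation setting.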
In many complex dynamical systems, or in black-box simulation models, evaluation of $f(X_n^k \mid X_{0:n-1}^k)$ may be prohibitively costly or even impossible. As long as one is capable of simulating from the system, the proposal distribution can be chosen as $q(\cdot) \equiv f(\cdot)$, in which case the particle weights are simply $w_n^k = g(y_n \mid X_{0:n}^k)$, eliminating the need to compute the densities $f(\cdot)$. The normalized particle weights $\bar\omega_n^k = W_n^k / \sum_{j=1}^K W_n^j$ are used to approximate the posterior $$\hat p(x_{0:n} \mid y_{0:n}) \approx \sum_{k=1}^K \bar\omega_n^k\, \delta_{X_{0:n}^k}(x_{0:n}). \quad (5)$$ In the very simple sequential importance sampling setup described here, the marginal likelihood can be estimated by $\hat p(y_{0:n}) = \frac{1}{K} \sum_{k=1}^K W_n^k$. 2.1 Resampling and degeneracy The algorithm described above suffers from a degeneracy problem wherein most of the normalized weights $\bar\omega_n^1, \dots, \bar\omega_n^K$ become very close to zero for even moderately large $n$. Traditionally this is combated by introducing a resampling step: as we progress from $n$ to $n+1$, particles with high weights are duplicated and particles with low weights are discarded, preventing all the probability mass in our approximation to the posterior from accumulating on a single particle. A resampling scheme is an algorithm for selecting the number of offspring particles $M_{n+1}^k$ that each particle $k$ will produce after stage $n$. Many different schemes for resampling particles exist; see [6] for an overview. Resampling changes the weights of particles: as the system progresses from $n$ to $n+1$, each of the $M_{n+1}^k$ children are assigned a new weight $V_{n+1}^k$, replacing the previous weight $W_n^k$ prior to resampling. Most resampling schemes generate an unweighted set of particles with $V_{n+1}^k = 1$ for all particles. When a resampling step is added at every $n$, the marginal likelihood can be estimated by $\hat p(y_{0:n}) = \prod_{i=0}^n \frac{1}{K} \sum_{k=1}^K w_i^k$; this estimate of the marginal likelihood is unbiased [8].
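A multinomial resampling step and the product-form marginal likelihood estimate described above can be sketched as follows; this is an illustrative sketch under the resample-at-every-step assumption, not the paper's code.

```python
import numpy as np

def multinomial_resample(particles, W, rng):
    """Draw K offspring in proportion to the unnormalized weights W and
    return an unweighted set (V^k = 1 for all k), as in Section 2.1."""
    K = len(particles)
    p = np.asarray(W, dtype=float)
    idx = rng.choice(K, size=K, p=p / p.sum())
    return [particles[i] for i in idx], np.ones(K)

def marginal_likelihood_smc(per_step_weights):
    """Unbiased estimate p_hat(y_{0:n}) = prod_i (1/K) sum_k w_i^k when
    resampling at every step; per_step_weights[i] holds the w_i^k."""
    return float(np.prod([np.mean(w) for w in per_step_weights]))
```

Multinomial resampling is the scheme used for the SMC baselines in the experiments section; other schemes (systematic, residual) differ only in how `idx` is drawn.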
2.2 Synchronization and limitations Our goal is to scale up to very large numbers of particles, using a parallel computing architecture where each particle is simulated as a separate process or thread. In order to resample at each $n$ we must compute the normalized weights $\bar\omega_n^k$, requiring us to wait until all individual particles have both finished forward simulation and computed their individual weight $W_n^k$ before the normalization and resampling required for any to proceed. While the forward simulation itself is trivially parallelizable, the weight normalization and resampling step is a synchronous, collective operation. In practice this can lead to significant underuse of computing resources in a multiprocessor environment, hindering our ability to scale up to large numbers of particles. Memory limitations on finite computing hardware also limit the number of simultaneous particles we are capable of running in practice. All particles must move through the system together, simultaneously; if the total memory requirement of the particles is greater than the available system RAM, then a substantial overhead will be incurred from swapping memory contents to disk. 3 The Particle Cascade The particle cascade algorithm we introduce addresses both these limitations: it does not require synchronization, and it keeps only a bounded number of particles alive in the system at any given time. Instead of resampling, we will consider particle branching, where each particle may produce 0 or more offspring. These branching events happen asynchronously and mutually exclusively, i.e. they are processed one at a time. 3.1 Local branching decisions At each stage $n$ of sequential Monte Carlo, particles process observation $y_n$. Without loss of generality, we can define an ordering on the particles $1, 2, \dots$ in the order they arrive at $y_n$.
We keep track of the running average weight $\overline{W}_n^k$ of the first $k$ particles to arrive at observation $y_n$ in an online manner: $$\overline{W}_n^1 = W_n^1 \quad \text{for } k = 1, \quad (6)$$ $$\overline{W}_n^k = \frac{k-1}{k}\, \overline{W}_n^{k-1} + \frac{1}{k}\, W_n^k \quad \text{for } k = 2, 3, \dots. \quad (7)$$ The number of children of particle $k$ depends on the weight $W_n^k$ of particle $k$ relative to those of other particles. Particles with higher relative weight are more likely to be located in a high posterior probability part of the space, and should be allowed to spawn more child particles. In our online asynchronous particle system we do not have access to the weights of future particles when processing particle $k$. Instead we will compare $W_n^k$ to the current average weight $\overline{W}_n^k$ among particles processed thus far. Specifically, the number of children, which we denote by $M_{n+1}^k$, will depend on the ratio $$R_n^k = \frac{W_n^k}{\overline{W}_n^k}. \quad (8)$$ Each child of particle $k$ will be assigned a weight $V_{n+1}^k$ such that the total weight of all children $M_{n+1}^k V_{n+1}^k$ has expectation $W_n^k$. There is a great deal of flexibility available in designing a scheme for choosing the number of child particles; we need only be careful to set $V_{n+1}^k$ appropriately. Informally, we would like $M_{n+1}^k$ to be large when $R_n^k$ is large. If $M_{n+1}^k$ is sampled in such a way that $\mathbb{E}[M_{n+1}^k] = R_n^k$, then we set the outgoing weight $V_{n+1}^k = \overline{W}_n^k$. Alternatively, if we are using a scheme which deterministically guarantees $M_{n+1}^k > 0$, then we set $V_{n+1}^k = W_n^k / M_{n+1}^k$. A simple approach would be to sample $M_{n+1}^k$ independently conditioned on the weights. In such schemes we could draw each $M_{n+1}^k$ from some simple distribution, e.g. a Poisson distribution with mean $R_n^k$, or a discrete distribution over the integers $\{\lfloor R_n^k \rfloor, \lceil R_n^k \rceil\}$. However, one issue that arises in such approaches, where the number of children for each particle is conditionally independent, is that the variance of the total number of particles at each generation can grow faster than desirable.
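The online running-average update of Eqs. (6)–(7) and the ratio of Eq. (8) amount to only a few lines; a minimal sketch:

```python
class RunningAverageWeight:
    """Tracks Wbar_n^k, the average weight of the first k particles to
    arrive at observation n, updated online (Eqs. 6-7)."""
    def __init__(self):
        self.k = 0
        self.avg = 0.0

    def update(self, w):
        self.k += 1
        # Wbar^k = ((k-1)/k) * Wbar^{k-1} + (1/k) * W^k
        self.avg += (w - self.avg) / self.k
        return self.avg

    def ratio(self, w):
        """R_n^k = W_n^k / Wbar_n^k (Eq. 8)."""
        return w / self.avg
```

The incremental form `avg += (w - avg) / k` is algebraically identical to Eq. (7) but avoids storing the previous weights, which is exactly what allows the decision rule to stay local.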
Suppose we start the system with $K_0$ particles. The number of particles at subsequent stages $n$ is given recursively as $K_n = \sum_{k=1}^{K_{n-1}} M_n^k$. We would like to avoid situations in which the number of particles becomes too large, or collapses to 1. Instead, we will allow $M_n^k$ to depend on the number of children of previous particles at $n$, in such a way that we can stabilize the total number of particles in each generation. Suppose that we wish for the number of particles to be stabilized around $K_0$. After $k-1$ particles have been processed, we expect the total number of children produced at that point to be approximately $k-1$, so that if the number is less than $k-1$ we should allow particle $k$ to produce more children, and vice versa. Similarly, if we already currently have more than $K_0$ children, we should allow particle $k$ to produce fewer children. We use a simple scheme which satisfies these criteria, where the number of particles is chosen at random when $R_n^k < 1$, and set deterministically when $R_n^k \ge 1$: $$(M_{n+1}^k, V_{n+1}^k) = \begin{cases} (0,\ 0) & \text{w.p. } 1 - R_n^k, \text{ if } R_n^k < 1; \\ (1,\ \overline{W}_n^k) & \text{w.p. } R_n^k, \text{ if } R_n^k < 1; \\ \left(\lfloor R_n^k \rfloor,\ W_n^k / \lfloor R_n^k \rfloor\right) & \text{if } R_n^k \ge 1 \text{ and } \sum_{j=1}^{k-1} M_{n+1}^j > \min(K_0, k-1); \\ \left(\lceil R_n^k \rceil,\ W_n^k / \lceil R_n^k \rceil\right) & \text{if } R_n^k \ge 1 \text{ and } \sum_{j=1}^{k-1} M_{n+1}^j \le \min(K_0, k-1). \end{cases} \quad (9)$$ As the number of particles becomes large, the estimated average weight closely approximates the true average weight. Were we to replace the deterministic rounding with a $\mathrm{Bernoulli}(R_n^k - \lfloor R_n^k \rfloor)$ choice between $\{\lfloor R_n^k \rfloor, \lceil R_n^k \rceil\}$, then this decision rule would define the same distribution on the number of offspring particles $M_{n+1}^k$ as the well-known systematic resampling procedure [3, 9]. Note the anytime nature of this algorithm — any given particle passing through the system needs only the running average $\overline{W}_n^k$ and the preceding child particle counts $\sum_{j=1}^{k-1} M_{n+1}^j$ in order to make local branching decisions, not the previous particles themselves.
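The decision rule of Eq. (9) translates directly into code; a sketch, where `children_so_far` (an illustrative name) stands for the sum of offspring counts of the earlier particles at this observation:

```python
from math import floor, ceil

def branch(w, w_avg, children_so_far, K0, k, rng):
    """Eq. 9: choose the number of children M and their outgoing weight V
    for the k-th arriving particle, randomly when R < 1 and by
    stabilizing rounding when R >= 1."""
    R = w / w_avg
    if R < 1.0:
        if rng.random() < R:
            return 1, w_avg              # one child carrying the average weight
        return 0, 0.0                    # particle dies
    if children_so_far > min(K0, k - 1):
        M = floor(R)                     # system crowded: round down
    else:
        M = ceil(R)                      # system sparse: round up
    return M, w / M                      # children split the parent's weight
```

Note that both branches preserve the expected total child weight: in the random case, $\mathbb{E}[M] = R$ and $V = \overline{W}_n^k$, so $\mathbb{E}[MV] = W_n^k$; in the deterministic case, $MV = W_n^k$ exactly.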
Thus it is possible to run this algorithm for some fixed number of initial particles $K_0$, inspect the output of the completed particles which have left the system, and decide whether to continue by initializing additional particles. 3.2 Computing expectations and marginal likelihoods Samples drawn from the particle cascade can be used to compute expectations in the same manner as usual; that is, given some function $\varphi(\cdot)$, we normalize weights $\bar\omega_n^k = W_n^k / \sum_{j=1}^{K_n} W_n^j$ and approximate the posterior expectation by $\mathbb{E}[\varphi(X_{0:n}) \mid y_{0:n}] \approx \sum_{k=1}^{K_n} \bar\omega_n^k\, \varphi(X_{0:n}^k)$. We can also use the particle cascade to define an estimator of the marginal likelihood $p(y_{0:n})$, $$\hat p(y_{0:n}) = \frac{1}{K_0} \sum_{k=1}^{K_n} W_n^k. \quad (10)$$ The form of this estimate is fairly distinct from the standard SMC estimators in Section 2. One can think of $\hat p(y_{0:n})$ as $\hat p(y_{0:n}) = \hat p(y_0) \prod_{i=1}^n \hat p(y_i \mid y_{0:i-1})$, where $$\hat p(y_0) = \frac{1}{K_0} \sum_{k=1}^{K_0} W_0^k, \qquad \hat p(y_n \mid y_{0:n-1}) = \frac{\sum_{k=1}^{K_n} W_n^k}{\sum_{k=1}^{K_{n-1}} W_{n-1}^k} \quad \text{for } n \ge 1. \quad (11)$$ Note that the incrementally updated running averages $\overline{W}_n^k$ are very directly tied to the marginal likelihood estimate; that is, $\hat p(y_{0:n}) = \frac{K_n}{K_0}\, \overline{W}_n^{K_n}$. 3.3 Theoretical properties, unbiasedness, and consistency Under weak assumptions we can show that the marginal likelihood estimator $\hat p(y_{0:n})$ defined in Eq. 10 is unbiased, and that both its variance and the $L^2$ errors of estimates of reasonable posterior expectations decrease in the number of particle initializations as $1/K_0$. Note that because the cascade is an anytime algorithm, $K_0$ may be increased simply, without restarting inference. Detailed proofs are given in the supplemental material; statements of the results are provided here. Denote by $B(E)$ the space of bounded real-valued functions on a space $E$, and suppose each $X_n$ is an $\mathcal{X}$-valued random variable. Assume the $\mathrm{Bernoulli}(R_n^k - \lfloor R_n^k \rfloor)$ version of the resampling rule in Eq. 9, and further assume that $g(y_n \mid \cdot\,, y_{0:n-1}) : \mathcal{X}^{n+1} \to \mathbb{R}$ is in $B(\mathcal{X}^{n+1})$ and strictly positive.
Finally assume that the ordering in which particles arrive at each $n$ is a random permutation of the particle index set, conditions which we state precisely in the supplemental material. Then the following propositions hold: Proposition 1 (Unbiasedness of marginal likelihood estimate) For any $K_0 \ge 1$ and $n \ge 0$, $$\mathbb{E}[\hat p(y_{0:n})] = p(y_{0:n}). \quad (12)$$ Proposition 2 (Variance of marginal likelihood estimate) For any $n \ge 0$, there exists a constant $a_n < \infty$ such that for any $K_0 \ge 1$, $$\mathbb{V}[\hat p(y_{0:n})] \le \frac{a_n}{K_0}. \quad (13)$$ Proposition 3 ($L^2$ error bounds) For any $n \ge 0$, there exists a constant $a_n < \infty$ such that for any $K_0 \ge 1$ and any $\psi_n \in B(\mathcal{X}^{n+1})$, $$\mathbb{E}\left[\left( \sum_{k=1}^{K_n} \bar\omega_n^k\, \psi_n(X_{0:n}^k) - \int p(dx_{0:n} \mid y_{0:n})\, \psi_n(x_{0:n}) \right)^2\right] \le \frac{a_n}{K_0}\, \|\psi_n\|^2. \quad (14)$$ Additional results and proofs can be found in the supplemental material. 4 Active bounding of memory usage In an idealized computational environment, with infinite available memory, our implementation of the particle cascade could begin by launching (a very large number) $K_0$ particles simultaneously which then gradually propagate forward through the system. In practice, only some finite number of particles, probably much smaller than $K_0$, can be simultaneously simulated efficiently. Furthermore, the initial particles are not truly launched all at once, but rather in a sequence, introducing a dependency in the order in which particles arrive at each observation $n$. Our implementation of the particle cascade addresses these issues by explicitly injecting randomness into the execution order of particles, and by imposing a machine-dependent hard cap on the number of simultaneous extant processes. This permits us to run our particle filter system indefinitely, for arbitrarily large and, in fact, growing initial particle counts $K_0$, on fixed commodity hardware. Each particle in our implementation runs as an independent operating system process [11].
In order to efficiently run a large number of particles, we impose a hard limit $\rho$ on the total number of particles which can simultaneously exist in the particle system; most of these will generally be sleeping processes. The ideal choice for this number will vary based on hardware capabilities, but in general should be made as large as possible. Scheduling across particles is managed via a global first-in random-out process queue of length $\rho$; this can equivalently be conceptualized as a random-weight priority queue. Each particle corresponds to a single live process, augmented by a single additional control process which is responsible only for spawning additional initial particles (i.e. incrementing the initial particle count $K_0$). When any particle $k$ arrives at any likelihood evaluation $n$, it computes its target number of child particles $M_{n+1}^k$ and outgoing particle weight $V_{n+1}^k$. If $M_{n+1}^k = 0$ it immediately terminates; otherwise it enters the queue.
[Figure 1: All results are reported over multiple independent replications, shown here as independent lines. (top) Convergence of estimates to ground truth vs. number of particles, shown as (left) MSE of marginal probabilities of being in each state for every observation n in the HMM, and (right) MSE of the latent expected position in the linear Gaussian state space model. (bottom) Convergence of marginal likelihood estimates to the ground truth value (marked by a red dashed line), for (left) the HMM, and (right) the linear Gaussian model.]
Once this particle either enters the queue or terminates, some other process
continues execution — this process is chosen uniformly at random, and as such may be a sleeping particle at any stage $n < N$, or it may instead be the control process, which then launches a new particle. At any given time, there are some number of particles $K_\rho < \rho$ currently in the queue, and so the probability of resuming any particular individual particle, or of launching a new particle, is $1/(K_\rho + 1)$. If the particle released from the queue has exactly one child to spawn, it advances to the next observation and repeats the resampling process. If, however, a particle has more than one child particle to spawn, rather than launching all child particles at once it launches a single particle to simulate forward, decrements the total number of particles left to launch by one, and itself re-enters the queue. The system is initialized by seeding the system with a number of initial particles $\rho_0 < \rho$ at $n = 0$, creating $\rho_0$ active initial processes. The ideal choice for the process count constraint $\rho$ may vary across operating systems and hardware. In the event that the process count is fully saturated (i.e. the process queue is full), then we forcibly prevent particles from duplicating themselves and creating new children. If we release a particle from the queue which seeks to launch $m > 1$ additional particles when the queue is full, we instead collapse all the remaining particles into a single particle; this single particle represents a virtual set of particles, but does not create a new process and requires no additional CPU or memory resources. We keep track of a particle count multiplier $C_n^k$ that we propagate forward along with the particle. All particles are initialized with $C_0^k = 1$, and then when a particle collapse takes place, they update their multiplier at $n+1$ to $mC_n^k$. This affects the way in which running weight averages are computed; suppose a new particle $k$ arrives with multiplier $C_n^k$ and weight $W_n^k$.
We incorporate all these values into the average weight immediately, updating $\overline{W}_n^k$ with the multiplicity taken into account, with $$\overline{W}_n^k = \frac{k-1}{k + C_n^k - 1}\, \overline{W}_n^{k-1} + \frac{C_n^k}{k + C_n^k - 1}\, W_n^k \quad \text{for } k = 2, 3, \dots. \quad (15)$$ This does not affect the computation of the ratio $R_n^k$. We preserve the particle multiplier until we reach the final $n = N$; then, after all forward simulation is complete, we re-incorporate the particle multiplicity when reporting the final particle weight $W_N^k = C_N^k V_N^k w_N^k$. 5 Experiments We report experiments on performing inference in two simple state space models, each with $N = 50$ observations, in order to demonstrate the overall validity and utility of the particle cascade algorithm.
[Figure 2: (top) Comparative convergence rates between SMC alternatives including our new algorithm, and (bottom) estimation of marginal likelihood, by time. Results are shown for (left) the hidden Markov model, and (right) the linear Gaussian state space model.]
The first is a hidden Markov model (HMM) with 10 latent discrete states, each with an associated Gaussian emission distribution; the second is a one-dimensional linear Gaussian model. Note that using these models means that we can compute posterior marginals at each $n$ and the marginal likelihood $Z = p(y_{0:N})$ exactly.
[Figure 3: Average time to draw a single complete particle on a variety of machine architectures. Queueing rather than blocking at each observation improves performance, and appears to improve relative performance even more as the available compute resources increase.]
Note that this plot shows only average time per sample, not a measure of statistical efficiency. The high speed of the non-resampling algorithm is not sufficient to make it competitive with the other approaches. These experiments are not designed to stress-test the particle cascade; rather, they are designed to show that performance of the particle cascade closely approximates that of fully synchronous SMC algorithms, even in a small-data small-complexity regime where we expect their performance to be very good. In addition to comparing to standard SMC, we also compare to a worst-case particle filter in which we never resample, instead propagating particles forward deterministically with a single child particle at every $n$. While the statistical (per-sample) efficiency of this approach is quite poor, it is fully parallelizable with no blocking operations in the algorithm at all, and thus provides a ceiling estimate of the raw sampling speed attainable in our overall implementation. We also benchmark against what we believe to be the most practically competitive similar approach, iterated conditional SMC [1]. Iterated conditional SMC corresponds to the particle Gibbs algorithm in the case where parameter values are known; by using a particle filter sweep as a step within a larger MCMC algorithm, iCSMC provides a statistically valid approach to sampling from a posterior distribution by repeatedly running sequential Monte Carlo sweeps each with a fixed number of particles. One downside to iCSMC is that it does not provide an estimate of the marginal likelihood. In all benchmarks, we propose from the prior distribution, with $q(x_n \mid \cdot) \equiv f(x_n \mid x_{0:n-1})$; the SMC and iCSMC benchmarks use a multinomial resampling scheme. On both these models we see the statistical efficiency of the particle cascade is approximately in line with synchronous SMC, slightly outperforming the iCSMC algorithm and significantly outperforming the fully parallelized non-resampling approach.
This suggests that the approximations made by computing weights at each $n$ based only on the previously observed particles, and the total particle count limit imposed by $\rho$, do not have an adverse effect on overall performance. In Fig. 1 we plot convergence per particle to the true posterior distribution, as well as convergence in our estimate of the normalizing constant. 5.1 Performance and scalability Although values will be implementation-dependent, we are ultimately interested not in per-sample efficiency but rather in our rate of convergence over time. We record wall clock time for each algorithm for both of these models; the results for convergence of our estimates of values and marginal likelihood are shown in Fig. 2. These particular experiments were all run on Amazon EC2, in an 8-core environment with Intel Xeon E5-2680 v2 processors. The particle cascade provides a much faster and more accurate estimate of the marginal likelihood than the competing methods, in both models. Convergence in estimates of values is quick as well, faster than the iCSMC approach. We note that for very small numbers of particles, running a simple particle filter is faster than the particle cascade, despite the blocking nature of the resampling step. This is due to the overhead incurred by the particle cascade in sending an initial flurry of $\rho_0$ particles into the system before we see any particles progress to the end; this initial speed advantage diminishes as the number of samples increases. Furthermore, in stark contrast to the simple SMC method, there are no barriers to drawing more samples from the particle cascade indefinitely. On this fixed hardware environment, our implementation of SMC, which aggressively parallelizes all forward particle simulations, exhibits a dramatic loss of performance as the number of particles increases from $10^4$ to $10^5$, to the point where simultaneously running $10^5$ particles is simply not possible in a feasible amount of time.
We are also interested in how the particle cascade scales up to larger hardware, or down to smaller hardware. A comparison across five hardware configurations is shown in Fig. 3. 6 Discussion The particle cascade has broad applicability to all SMC and particle filtering inference applications. For example, constructing an appropriate sequence of densities for SMC is possible in arbitrary probabilistic graphical models, including undirected graphical models; see e.g. the sequential decomposition approach of [10]. We are particularly motivated by the SMC-based probabilistic programming systems that have recently appeared in the literature [13, 11]. Both suggested that the primary performance bottleneck in their inference algorithms was barrier synchronization, something we have done away with entirely. What is more, while particle MCMC methods are particularly appropriate when there is a clear boundary that can be exploited between parameters of interest and nuisance state variables, in probabilistic programming in particular, parameter values must be generated as part of the state trajectory itself, leaving no explicitly denominated latent parameter variables per se. The particle cascade is particularly relevant in such situations. Finally, as the particle cascade yields an unbiased estimate of the marginal likelihood, it can be plugged directly into PIMH, SMC² [4], and other existing pseudo-marginal methods. Acknowledgments Yee Whye Teh's research leading to these results has received funding from EPSRC (grant EP/K009362/1) and the ERC under the EU's FP7 Programme (grant agreement no. 617411). Arnaud Doucet's research is partially funded by EPSRC (grants EP/K009850/1 and EP/K000276/1). Frank Wood is supported under DARPA PPAML through the U.S. AFRL under Cooperative Agreement number FA8750-14-2-0004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation hereon.
The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, the U.S. Air Force Research Laboratory or the U.S. Government. References [1] Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269–342, 2010. [2] Anthony Brockwell, Pierre Del Moral, and Arnaud Doucet. Sequentially interacting Markov chain Monte Carlo methods. Annals of Statistics, 38(6):3387–3411, 2010. [3] James Carpenter, Peter Clifford, and Paul Fearnhead. An improved particle filter for non-linear problems. Radar, Sonar and Navigation, IEE Proceedings, 146(1):2–7, Feb 1999. [4] Nicolas Chopin, Pierre E. Jacob, and Omiros Papaspiliopoulos. SMC²: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75(3):397–426, 2013. [5] D. Crisan, P. Del Moral, and T. Lyons. Discrete filtering using branching and interacting particle systems. Markov Process. Related Fields, 5(3):293–318, 1999. [6] Randal Douc, Olivier Cappé, and Eric Moulines. Comparison of resampling schemes for particle filtering. In 4th International Symposium on Image and Signal Processing and Analysis (ISPA), pages 64–69, 2005. [7] Seong-Hwan Jun and Alexandre Bouchard-Côté. Memory (and time) efficient sequential Monte Carlo. In Proceedings of the 31st International Conference on Machine Learning, 2014. [8] Pierre Del Moral. Feynman-Kac Formulae – Genealogical and Interacting Particle Systems with Applications. Probability and its Applications. Springer, 2004. [9] Lawrence M. Murray, Anthony Lee, and Pierre E. Jacob. Parallel resampling in the particle filter. arXiv preprint arXiv:1301.4019, 2014. [10] Christian A. Naesseth, Fredrik Lindsten, and Thomas B. Schön.
Sequential Monte Carlo for Graphical Models. In Advances in Neural Information Processing Systems 27, 2014. [11] Brooks Paige and Frank Wood. A compilation target for probabilistic programming languages. In Proceedings of the 31st International Conference on Machine Learning, 2014. [12] Nick Whiteley, Anthony Lee, and Kari Heine. On the role of interaction in sequential Monte Carlo algorithms. arXiv preprint arXiv:1309.2918, 2013. [13] Frank Wood, Jan Willem van de Meent, and Vikash Mansinghka. A new approach to probabilistic programming inference. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, 2014.
Sparse space-time deconvolution for Calcium image analysis Ferran Diego Fred A. Hamprecht Heidelberg Collaboratory for Image Processing (HCI) Interdisciplinary Center for Scientific Computing (IWR) University of Heidelberg, Heidelberg 69115, Germany {ferran.diego,fred.hamprecht}@iwr.uni-heidelberg.de Abstract We describe a unified formulation and algorithm to find an extremely sparse representation for Calcium image sequences in terms of cell locations, cell shapes, spike timings and impulse responses. Solution of a single optimization problem yields cell segmentations and activity estimates that are on par with the state of the art, without the need for heuristic pre- or postprocessing. Experiments on real and synthetic data demonstrate the viability of the proposed method. 1 Introduction A detailed understanding of brain function is a still-elusive grand challenge. Experimental evidence is collected mainly by electrophysiology and “Calcium imaging”. In the former, multi-electrode array recordings allow the detailed study of hundreds of neurons, while field potentials reveal the collective action of dozens or hundreds of neurons. The more recent Calcium imaging, on the other hand, is a fluorescence microscopy technique that allows the concurrent monitoring of the individual actions of thousands of neurons. While its temporal resolution is limited by the chemistry of the employed fluorescent markers, its great information content makes Calcium imaging an experimental technique of first importance in the study of neural processing, both in vitro [16, 6] and in vivo [5, 7]. However, the acquired image sequences are large, and in laboratory practice the analysis remains a semi-manual, tedious and subjective task. Calcium image sequences reveal the activity of neural tissue over time. Whenever a neuron fires, its fluorescence signal first increases and then decays in a characteristic time course. 
Evolutionary and energetic constraints on the brain guarantee that, in most cases, neural activity is sparse in both space (only a fraction of neurons fire at a given instant) and time (most neurons fire only intermittently). The problem setting can be formalized as follows: given an image sequence as input, the desired output is (i) a set of cells (footnote 1) and (ii) a set of time points at which these cells were triggered. We here propose an unsupervised learning formulation and algorithm that leverages the known structure of the data to produce the sparsest representations published to date and to allow for meaningful automated analysis. 1.1 Prior Art Standard laboratory practice is to delineate each cell manually by a polygon, and then integrate its fluorescence response over the polygon for each point in time. The result is a set of time series, one per cell. (Footnote 1: Optical sectioning by techniques such as confocal or two-photon microscopy implies that we see only parts of a neuron, such as a slice through its cell body or a dendrite, in an image plane. For brevity, we simply refer to these as “cells” in the following.) Figure 1: Sketch of selected previous work. Left (a): matrix factorization [13, 15, 4, 3, 12]: decomposition of an image sequence into a sum of K components, each given by the Cartesian product of a spatial component or basis image Dk and its temporal evolution uk; in this article, we represent such Cartesian products by the convolution of multidimensional arrays. Right (b): convolutional sparse coding [8, 25, 20, 17, 14]: description of a single image in terms of a sum of latent feature maps Dk convolved with filters Hk. Given that the fluorescence signal impulse response to a stimulus is stereotypic, these time series can then be deconvolved to obtain a sparse temporal representation for each cell using nonnegative sparse deconvolution [24, 5, 10]. 
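As a concrete illustration of this last step, nonnegative deconvolution of a single cell's time series can be sketched as follows. This is a minimal sketch, not the cited algorithms [24, 5, 10]; the exponential kernel and all numeric settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

# Toy nonnegative deconvolution of one fluorescence trace, assuming a
# known one-sided exponential impulse response (illustrative only).
T, fps, tau = 300, 30.0, 0.5
kernel = np.exp(-np.arange(T) / (tau * fps))   # stereotypic transient

# Convolution matrix: column j holds the kernel shifted to start at j.
A = np.zeros((T, T))
for j in range(T):
    A[j:, j] = kernel[: T - j]

rng = np.random.default_rng(0)
spikes_true = np.zeros(T)
spikes_true[rng.choice(T, size=5, replace=False)] = 1.0
trace = A @ spikes_true + 0.1 * rng.standard_normal(T)

# Nonnegative least squares already yields a fairly sparse estimate;
# an additional l1 penalty (nonnegative lasso) would sparsify further.
spikes_est, resid = nnls(A, trace)
```

Here `nnls` returns both the coefficient estimate and the residual norm; swapping it for a nonnegative lasso trades a little fidelity for additional sparsity.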
The problem of automatically identifying the cells has received much less attention, possibly due to the following difficulties [16, 23]: i) low signal-to-noise ratio (SNR); ii) large variation in luminance and contrast; iii) heterogeneous background; iv) partial occlusion; and v) pulsations due to heartbeat or breathing in live animals. Existing work either hard-codes prior knowledge on the appearance of specific cell types [16, 22], or uses supervised learning (pixel and object level classification, [23]) or unsupervised learning (convolutional sparse block coding, [14]). Closest in spirit to our work are attempts to simultaneously segment the cells and estimate their time courses. This is accomplished by matrix factorization techniques such as independent component analysis [13], nonnegative matrix factorization [12] and (hierarchical) dictionary learning [4, 3]. However, none of the above give results that are truly sparse in time; and all of the above have to go to some lengths to obtain reasonable cell segmentations: [13, 4] resort to heuristic post-processing, while [3] invokes structured sparsity-inducing norms and [15] uses an additional clustering step as initialization. All these extra steps are necessary to ensure that each spatial component represents exactly one cell. In terms of mathematical modeling, we build on recent advances and experiments in convolutional sparse coding such as [8, 25, 20, 17, 14]. Ref. [21] already applies convolutional sparse coding to video, but achieves sparsity only in space and not in time (where only pairs of frames are used to learn latent representations). Refs. [19, 18] consider time series, which they deconvolve, without however using (or indeed needing, given their data) a sparse spatial representation. 1.2 Contributions Summarizing prior work, we see three strands: i) Fully automated methods that require an extrinsic cell segmentation, but can find a truly sparse (see footnote 2) representation of the temporal activity. 
ii) Fully automated methods that can detect and segment cells, but do not estimate time courses in the same framework. iii) Techniques that both segment cells and estimate their time courses. Unfortunately, existing techniques either produce temporal representations that are not truly sparse [12, 4, 3] or do not offer a unified formulation of segmentation and activity detection that succeeds without extraneous clustering steps [15]. In response, we offer the first unified formulation in terms of a single optimization problem: its solution simultaneously yields all cells along with their actions over time. The representation of activity is truly sparse, ideally in terms of a single nonzero coefficient for each distinct action of a cell. This is accomplished by sparse space-time deconvolution. Given a motion-corrected sequence of Calcium images, it estimates i) the locations of cells and ii) their activity, along with iii) typical cell shapes and iv) typical impulse responses. Taken together, these ingredients afford the sparsest, and thus hopefully most interpretable, representation of the raw data. (Footnote 2: We distinguish a sparse representation, in which the estimated time course of a cell has many zeros, from a “truly sparse” representation, in which a single action of a cell is ideally represented in terms of a single nonzero coefficient.) Figure 2: Summary of sparse space-time deconvolution. Top: Unified formulation in terms of a single optimization problem. Bottom: Illustration on a tiny subset of data. Left: raw data. The fluorescence level to be estimated is heavily degraded by Poisson shot noise that is unavoidable at the requisite short exposure times. Middle: smoothed raw data. Right: approximation of the data in terms of a Cartesian product of estimated cell shapes and temporal activities. Each temporal activity is further decomposed as a convolution of estimated impulse responses and very few nonzero coefficients. In addition, our joint formulation 
allows us to estimate a nonuniform and temporally variable background. Experiments on difficult artificial and real-world data show the viability of the proposed formulation. Notation: Boldface symbols denote multidimensional arrays. We define $A * B$ as the convolution of multidimensional arrays $A$ and $\mathrm{mirror}(B)$, with the result trimmed to the dimensions of $A$; here, the "mirror" operation flips a multidimensional array along every dimension. $A \circledast B$ is the full convolution result of multidimensional arrays $A$ and $\mathrm{mirror}(B)$. These definitions are analogous to the "convn" command in MATLAB with shape arguments "same" and "full", respectively. $\|\cdot\|_0$ counts the number of non-zero coefficients, and $\|\cdot\|_F$ is the Frobenius norm. 2 Sparse space-time deconvolution (SSTD) 2.1 No background subtraction An illustration of the proposed formulation is given in Fig. 2, and our notation is summarized in Table 1. We seek to explain the image sequence $X$ in terms of up to $K$ cells and their activity over time. In so doing, all cells are assumed to have exactly one of $J \ll K$ possible appearances, and to reside at a unique location (Eq. 1.1). These cell locations are encoded in $K$ latent binary feature maps. The activity of each cell is further decomposed in terms of a convolution of impulses (giving the precise onset of each burst) with exactly one of $L \ll K$ types of impulse responses. A single cell may "use" different impulse responses at different times, but just one type at any one time (Eq. 1.2). All of the above is achieved by solving the following optimization problem:

$$\min_{D,H,f,s}\;\Big\|\,X - \sum_{k=1}^{K}\Big(\sum_{j=1}^{J} D_{k,j} * H_j\Big) \circledast \Big(\sum_{l=1}^{L} s_{k,l} * f_l\Big)\Big\|_F^2 \qquad (1)$$

such that:
$\sum_j \|D_{k,j}\|_0 \le 1,\;\forall k$ (1.1): at most one location and appearance per component;
$\sum_l \|s_{t,k,l}\|_0 \le 1,\;\forall k, t$ (1.2): only one type of activation at each time per cell;
$\|H_j\|_F^2 \le 1,\;\forall j$ (1.3): prevent cell appearances from becoming large;
$\|f_l\|_2^2 \le 1,\;\forall l$ (1.4): prevent impulse filters from becoming large.

Here, the optimization is with respect to the cell detection maps $D$, cell appearances $H$, activity patterns or impulse responses $f$, as well as "truly sparse" activity indicator vectors $s$. This optimization is subject to the two constraints mentioned earlier plus upper bounds on the norms of the learned filters. The user needs to select the following parameters: an upper bound $K$ on the number of cells and the size in pixels $H$ of the matched filters / convolution kernels $H_j$; upper bounds $J$ on the number of different appearances and $L$ on the number of different activity patterns that cells may have; and the number of coefficients $F$ that the learned impulse responses may have. Considering that we propose a method for both cell detection and sparse time course estimation, this total of six user-adjustable parameters compares favourably to previous work. Methods that decouple these steps typically need more parameters altogether, and the heuristics that prior work on joint optimization uses also have a large number of (implicit) parameters. While many other approximations, such as $\sum_{k=1}^{K} D_k \circledast s_k * f_k$ or $\sum_{k=1}^{K}\sum_{j=1}^{J} D_{k,j} * H_j \circledast s_{k,j} * f_j$, are conceivable and may make sense in other application areas, the proposed formulation is the most parsimonious of its kind. Indeed, it uses a small pool of $J$ shapes and $L$ firing patterns, which can be combined freely, to represent all cells and their activities. It is owing to this fact that we dub the method sparse space-time deconvolution (SSTD). 
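To make the array-convolution notation concrete, a one-dimensional numpy sketch of the two operators might look as follows (the helper names `star` and `circledast` are ours, purely for illustration):

```python
import numpy as np

def mirror(b):
    # Flip along every dimension (a single dimension in this 1-D sketch).
    return b[::-1]

def star(a, b):
    # "A * B": convolve a with mirror(b), trimmed to a's size,
    # analogous to matlab's convn(..., 'same').
    full = np.convolve(a, mirror(b))
    start = (len(b) - 1) // 2
    return full[start:start + len(a)]

def circledast(a, b):
    # "A ⊛ B": the full convolution of a with mirror(b),
    # analogous to matlab's convn(..., 'full').
    return np.convolve(a, mirror(b))

a = np.array([0.0, 1.0, 0.0, 0.0, 2.0])
b = np.array([1.0, 0.5])
```

Convolving with the mirrored second argument makes both operators behave like cross-correlations, which is why the same notation covers both "placing" a filter at spike locations and "matching" it against data.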
2.2 SSTD with background subtraction In actual experiments, the observed fluorescence level is a sum of the signal of interest plus a nuisance background signal. This background is typically nonuniform in the spatial domain and, while it can be modeled as constant over time [15, 24], is often also observed to vary over time, prompting robust local normalization as a preprocessing step [7, 4]. Here, we generalize the formulation from (1) to correct for a background that is assumed to be spatially smooth and time-varying. In more detail, we model the background in terms of the direct product $B_s \circledast b_t$ of a spatial component $B_s \in \mathbb{R}_+^{M \times N \times 1}$ and a time series $b_t \in \mathbb{R}_+^{1 \times 1 \times T}$. Insights into the physics and biology of Calcium imaging suggest that (except for saturation regimes characterized by high neuron firing rates) it is reasonable to assume that the normalized quantity (observed fluorescence minus background) divided by background, typically dubbed $\Delta F/F_0$, is linearly related to the intracellular Calcium concentration [24, 10]. In keeping with this notion, we now propose our final model, viz.

$$\min_{D,H,f,s,B_s,b_t}\;\Big\|\Big(X - \sum_{k=1}^{K}\Big(\sum_{j=1}^{J} D_{k,j} * H_j\Big) \circledast \Big(\sum_{l=1}^{L} s_{k,l} * f_l\Big) - B_s \circledast b_t\Big) \oslash \big(B_s \circledast b_t\big)\Big\|_F^2 + \lambda\|B_s\|_{TV} \qquad (2)$$

such that (1.1)–(1.4), $B_s > 0$, $b_t > 0$, with "$\oslash$" denoting elementwise division. Note that the optimization now also runs over the spatial and temporal components of the background, with the total variation (TV) regularization term enforcing spatial smoothness of the spatial background component [2] (TV measures the sum of the absolute values of the spatial gradient). In addition to the previously defined parameters, the user also needs to select the parameter $\lambda$, which determines the smoothness of the background estimate.

Table 1: Notation
$X \in \mathbb{R}_+^{M \times N \times T}$: image sequence of length $T$, each image is $M \times N$
$K \in \mathbb{N}_+$: number of cells
$J \in \mathbb{N}_+$: number of distinct cell appearances
$H_j \in \mathbb{R}_+^{H \times H \times 1}$: $j$th cell appearance / spatial filter / matched filter of size $H \times H$
$D_{k,j} \in \{0, 1\}^{M \times N \times 1}$: indicator matrix of the $k$th cell for the $j$th cell appearance
$L \in \mathbb{N}_+$: number of distinct impulse responses / activity patterns
$f_l \in \mathbb{R}_+^{1 \times 1 \times F}$: $l$th impulse response of length $F$
$s_{k,l} \in \mathbb{R}_+^{1 \times 1 \times T}$: indicator vector of the $k$th spike train for the $l$th impulse response

2.3 Optimization The optimization problem in (2) is convex in either the spatial or the temporal filters $H$, $f$ alone when keeping all other unknowns fixed, but it is nonconvex in general. In our experiments, we use a block coordinate descent strategy [1, Section 2.7] that iteratively optimizes one group of variables while fixing all others (see supplementary material for details). The nonconvex $\ell_0$-norm constraints require that cell centroids $D$ and spike trains $s$ are estimated by techniques such as convolutional matching pursuit [20], while the spatio-temporal filters can be learned using simpler gradient descent [25], K-SVD [20] or simple algebraic expressions. All unknowns are initialized with standard Gaussian noise truncated to nonnegative values. 
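A one-dimensional caricature of this alternation can be sketched as follows. This is our own drastic simplification, not the paper's multidimensional solver: the spatial part is trivialized, there is a single impulse response, spikes are found by greedy matching pursuit, and the filter is refit by least squares.

```python
import numpy as np

def conv_full(s, f):
    # Place a copy of filter f at every nonzero of s, trimmed to len(s).
    return np.convolve(s, f)[: len(s)]

def matching_pursuit(x, f, n_spikes):
    # Greedily add spikes where the residual correlates best with f.
    s = np.zeros_like(x)
    r = x.copy()
    for _ in range(n_spikes):
        corr = np.correlate(r, f, mode="full")[len(f) - 1 :]  # lag scores
        j = int(np.argmax(corr))
        a = max(corr[j] / (f @ f), 0.0)   # least-squares, nonneg amplitude
        s[j] += a
        r = x - conv_full(s, f)
    return s

def refit_filter(x, s, flen):
    # Design matrix: column i is the spike train shifted by i samples.
    A = np.column_stack(
        [np.r_[np.zeros(i), s[: len(s) - i]] for i in range(flen)])
    f, *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.clip(f, 0.0, None)          # keep the filter nonnegative

rng = np.random.default_rng(1)
f_true = np.exp(-np.arange(20) / 6.0)
s_true = np.zeros(200)
s_true[[30, 90, 150]] = 1.0
x = conv_full(s_true, f_true) + 0.02 * rng.standard_normal(200)

f = np.ones(20)                           # crude initialization
for _ in range(5):                        # block coordinate descent
    s = matching_pursuit(x, f, n_spikes=3)
    f = refit_filter(x, s, flen=20)
```

Each half-step only ever touches one block of variables while the other is held fixed, which is the essence of the block coordinate descent strategy described above.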
The limiting number of cells K can be set to a generous upper bound on the expected true number, because spatial components without activity are automatically set to zero during optimization. 3 Experimental Setup This section describes the data and algorithms used for experiments and benchmarks. 3.1 Inferring Spike Trains The following methods assume that cell segmentation has already been performed by some means, and that the fluorescence signal of individual pixels has been summed up for each cell and every time step. They can hence concentrate exclusively on the estimation of a “truly sparse” representation of the respective activities in terms of a “spike train”. Data We follow [24, 5] in generating 1100 sequences consisting of one-sided exponential decays with a constant amplitude of 1 and decay rate τ = 1/2 s, sampled at 30 fps with firing rates ranging uniformly from 1 to 10 Hz and different Gaussian noise levels σ ∈ [0.1, 0.6]. Fast non-negative deconvolution (FAST) [24] uses a one-sided exponential decay as a parametric model for the impulse response by invoking a first-order autoregressive process. Given that our artificial data is free of a nuisance background signal, we disregard its ability to also model such a background. The sole remaining parameter, the rate of the exponential decay, can be fit using maximum likelihood estimation or a method-of-moments approach [15]. Peeling [5] finds spikes by means of a greedy approach that iteratively removes one impulse response at a time from the residual fluorescence signal. Importantly, this stereotypical transient must be manually defined a priori. Sparse temporal deconvolution (STD) with a single impulse response is a special case of this work for given nonoverlapping cell segmentations and L = 1; it is also a special case of [14]. The impulse response can be specified beforehand (amounting to sparse coding), or learned from the data (that is, performing dictionary learning on time-series data). 
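The benchmark recipe described under "Data" above can be reproduced along these lines. This is a hedged sketch: binning Poisson spike counts per frame is our simplification of the cited protocol, and the function name is ours.

```python
import numpy as np

def make_trace(duration_s=10.0, rate_hz=5.0, tau=0.5, fps=30.0,
               sigma=0.3, rng=None):
    # Poisson spike counts per frame, convolved with a unit-amplitude
    # one-sided exponential decay (decay rate tau), plus Gaussian noise,
    # mimicking the synthetic traces described above.
    rng = np.random.default_rng() if rng is None else rng
    T = int(duration_s * fps)
    spikes = rng.poisson(rate_hz / fps, size=T).astype(float)
    kernel = np.exp(-np.arange(T) / (tau * fps))
    trace = np.convolve(spikes, kernel)[:T] + sigma * rng.standard_normal(T)
    return spikes, trace

spikes, trace = make_trace(rng=np.random.default_rng(0))
```

Sweeping `rate_hz` over 1–10 Hz and `sigma` over [0.1, 0.6] reproduces the difficulty range of the benchmark.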
3.2 Segmenting Cells and Estimating Activities Data Following the procedure described in [4, 12, 13], we have created 80 synthetic sequences with a duration of 15 s each at a frame rate of 30 fps with image sizes M = N = 512 pixels. The cells are randomly selected from 36 cell shapes extracted from real data, and are placed at random locations with a maximum spatial overlap of 30%. Each cell fires according to a dependent Poisson process, and its activation pattern follows a one-sided exponential decay with a scale selected uniformly at random between 500 and 800 ms. The average number of active cells per frame varies from 1 to 10. Finally, the data has been distorted by additive white Gaussian noise with a relative amplitude (max. intensity − mean intensity)/σnoise ∈ {3, 5, 7, 10, 12, 15, 17, 20}. By construction, the identity, location and activity patterns of all cells are known. The supplemental material shows an example with its corresponding inferred neural activity. Real-world data comes from two-photon microscopy of mouse motor cortex recorded in vivo [7], which has been motion-corrected. These sequences allow us to conduct qualitative experiments. ADINA [4] relies on dictionary learning [11] to find both spatial components and their time courses. Both have many zero coefficients, but are not “truly sparse” in the sense of this paper. The method comes with a heuristic post-processing to separate coactivated cells into distinct spatial components. NMF+ADINA uses non-negative matrix factorization to infer both the spatial and temporal primitives of an image sequence as in [12, 15]. In contrast to [15], which uses a k-means clustering of highly confident spike vectors to provide a good initialization in the search for spatial components, we couple NMF with the postprocessing of ADINA. 
CSBC+SC combines convolutional sparse block coding [14] based on a single still image (obtained from the temporal mean or median image, or a maximum intensity projection across time) with temporal sparse coding. CSBC+STD combines convolutional sparse block coding [14] based on a single still image (obtained in the same way) with the proposed sparse temporal deconvolution from Sect. 3.1. SSTD is the method described here. We used J = L = 2, K = 200, F = 200, and H = 31 and 15 for the artificial and real data, respectively. 4 Results 4.1 Inferring spike trains To quantify the accuracy of activity detection, we first threshold the estimated activities and then compute, by summing over each step in every time series, the numbers of true and false negatives and positives. For a fair comparison, the thresholds were adjusted separately for each method to give optimal accuracy. Sensitivity, precision and accuracy computed from the above implicitly measure both the quality of the segmentation and the quality of the activity estimation. An additional measure, the SPIKE distance [9], emphasizes any temporal deviations between the true and estimated spike locations in a truly sparse representation. Fig. 3 shows that, unsurprisingly, the best results are obtained when methods use the true impulse response rather than learning it from the data. This finding does not carry over to real data, where a “true” impulse response is typically not known. Given the true impulse response, both FAST and STD fare better than Peeling, showing that a greedy algorithm is faster but gives somewhat worse results. Even when learning the impulse response, FAST and STD are no worse than Peeling. 
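The per-timestep bookkeeping just described can be written out as follows (function and variable names are ours, purely for illustration):

```python
import numpy as np

def detection_metrics(true_spikes, est_activity, thresh):
    # Threshold the estimated activity, then count true/false positives
    # and negatives summed over every time step.
    t = np.asarray(true_spikes) > 0
    e = np.asarray(est_activity) > thresh
    tp = int(np.sum(t & e)); fp = int(np.sum(~t & e))
    fn = int(np.sum(t & ~e)); tn = int(np.sum(~t & ~e))
    sensitivity = tp / max(tp + fn, 1)
    precision = tp / max(tp + fp, 1)
    accuracy = (tp + tn) / max(tp + fp + tn + fn, 1)
    return sensitivity, precision, accuracy

sens, prec, acc = detection_metrics([1, 0, 0, 1, 0],
                                    [0.9, 0.2, 0.8, 0.7, 0.1],
                                    thresh=0.5)
```

Sweeping `thresh` and keeping the value that maximizes accuracy mirrors the per-method threshold tuning described above.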
When learning the parameters, FAST has an advantage over STD on this artificial data because FAST already uses the correct parametric form of the impulse response that was used to generate the data and only needs to learn a single parameter, while STD learns a more general but nonparametric activity model with many degrees of freedom. The great spread of all quality measures results from the wide range of noise levels used, and the overall deficiencies in accuracy attest to the difficulty of these simulated data sets. Figure 3: Sensitivity, precision, accuracy (higher is better) and SPIKE distance (lower is better) of different methods for spike train estimation. Methods that need to learn the activation pattern perform worse than those using the true (but generally unknown) activation pattern and its parameters. FAST is at an advantage here because it happens to use the very impulse response that was used in generating the data. 4.2 Segmenting Cells and Inferring Spike Trains Fig. 4 shows that all the methods from Sect. 3.2 reach respectable and comparable performance in the task of identifying neural activity from non-trivial synthetic image sequences. CSBC+SC reaches the highest sensitivity, while SSTD has the greatest precision. SSTD apparently achieves performance comparable to the other methods without the need for heuristic pre- or postprocessing. Multiple random initializations lead to similar learned filters (results not shown), so the optimization problem seems to be well-posed. The price to pay for the elegance of a unified formulation is the much higher computational cost of this more involved optimization. 
Again, the spread of sensitivities, precisions and accuracies results from the range of noise levels used in the simulations. The plots suggest that SSTD may have fewer “catastrophic failure” cases, but an even larger set of sequences will be required to verify this tendency. Figure 4: Quality of cell detection and the estimation of cell activities. SSTD does as well as the competing methods that rely on heuristic pre- or post-processing. Real Sequences: Qualitative results are shown in Fig. 5. SSTD is able to distinguish both cells with spatial overlap and cells with high temporal correlation. It compensates for large variations in luminance and contrast, and can discriminate between different types of cells. Exploiting truly sparse but independent representations in both the spatial and the temporal domain allows us to infer plausible neural activity and, at the same time, to reduce the noise in the underlying Calcium image sequence. 5 Discussion The proposed SSTD combines the decomposition of the data into low-rank components with the finding of a convolutional sparse representation for each of those components. The formalism allows exploiting sparseness and the repetitive motifs that are so characteristic of biological data. Users need to choose the number and size of filters, which indirectly determine the number of cell types found and their activation patterns. As shown in Fig. 5, the approach gives credible interpretations of raw data in terms of an extremely sparse and hence parsimonious representation. The decomposition of a space-time volume into a Cartesian product of spatial shapes and their time courses is only possible when cells do not move over time. 
This assumption holds for in vitro experiments, and can often be satisfied by good fixation in in vivo experiments, but is not universally valid. Correcting for motions in a generalized unified framework is an interesting direction for future work. The experiments in section 4.1 suggest that it may also be worthwhile to investigate the use of more parametric forms for the impulse response instead of the completely unbiased variant used here. Figure 5: Qualitative results on two real data sets. The data on the left column shows mostly cell bodies, while the data on the right shows both cell bodies (large) and dendrites (small). 
For each data set, the top left shows an average projection of the relative fluorescence change across time with cell centroids D (black dots) and contours of segmented cells, and the top right shows the learned impulse responses. In the middle, the fluorescence levels integrated over the segmented cells are shown in random colors. The bottom shows by means of small disks the location, type and strength of the impulses that summarize all the data shown in the middle. Together with the cell shapes, the impulses form part of the “truly sparse” representation that we propose. When convolving these spikes with the impulse responses from the top right insets, we obtain the time courses shown in random colors. Such advances will further help make Calcium imaging an enabling tool for the neurosciences. References [1] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 1999. [2] A. Chambolle. An algorithm for total variation minimization and applications, 2004. [3] F. Diego and F. A. Hamprecht. Learning multi-level sparse representations. In NIPS, 2013. [4] F. Diego, S. Reichinnek, M. Both, and F. A. Hamprecht. Automated identification of neuronal activity from calcium imaging by sparse dictionary learning. ISBI 2013 Proceedings, pages 1058–1061, 2013. [5] B. F. Grewe, D. Langer, H. Kasper, B. M. Kampa, and F. Helmchen. High-speed in vivo calcium imaging reveals neuronal network activity with near-millisecond precision. Nat Meth, 7(5):399–405, May 2010. [6] C. Grienberger and A. Konnerth. Neuron, volume 73, chapter Imaging Calcium in Neurons, pages 862–885. Cell Press, Mar 2012. [7] D. Huber, D. A. Gutnisky, S. Peron, D. H. O'Connor, J. S. Wiegert, L. Tian, T. G. Oertner, L. L. Looger, and K. Svoboda. Multiple dynamic representations in the motor cortex during sensorimotor learning. Nature, 484(7395):473–478, Apr 2012. [8] K. Kavukcuoglu, P. Sermanet, Y. Boureau, K. Gregor, M. Mathieu, and Y. LeCun. 
Learning convolutional feature hierarchies for visual recognition. In NIPS, 2010. [9] T. Kreuz, D. Chicharro, C. Houghton, R. G. Andrzejak, and F. Mormann. Monitoring spike train synchrony. Journal of Neurophysiology, 2012. [10] H. Luetcke, F. Gerhard, F. Zenke, W. Gerstner, and F. Helmchen. Inference of neuronal network spike dynamics and topology from calcium imaging data. Frontiers in Neural Circuits, 7(201), 2013. [11] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online Learning for Matrix Factorization and Sparse Coding. Journal of Machine Learning Research, 2010. [12] R. Maruyama, K. Maeda, H. Moroda, I. Kato, M. Inoue, H. Miyakawa, and T. Aonishi. Detecting cells using non-negative matrix factorization on calcium imaging data. Neural Networks, 55(0):11–19, 2014. [13] E. A. Mukamel, A. Nimmerjahn, and M. J. Schnitzer. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron, 2009. [14] M. Pachitariu, A. M. Packer, N. Pettit, H. Dalgleish, M. Hausser, and M. Sahani. Extracting regions of interest from biological images with convolutional sparse block coding. In NIPS, 2013. [15] E. A. Pnevmatikakis and L. Paninski. Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions. In NIPS, 2013. [16] S. Reichinnek, A. von Kameke, A. M. Hagenston, E. Freitag, F. C. Roth, H. Bading, M. T. Hasan, A. Draguhn, and M. Both. Reliable optical detection of coherent neuronal activity in fast oscillating networks in vitro. NeuroImage, 60(1), 2012. [17] R. Rigamonti, A. Sironi, V. Lepetit, and P. Fua. Learning separable filters. In Conference on Computer Vision and Pattern Recognition, 2013. [18] M. N. Schmidt and M. Mørup. Nonnegative matrix factor 2-D deconvolution for blind single channel source separation. In ICA, 2006. [19] P. Smaragdis. Non-negative matrix factor deconvolution; extraction of multiple sound sources from monophonic inputs. In ICA, pages 494–499, 2004. [20] A. Szlam, K. Kavukcuoglu, and Y. 
LeCun. Convolutional matching pursuit and dictionary training. Computer Research Repository (arXiv), 2010. [21] G. W. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional learning of spatio-temporal features, 2010. [22] J. Tomek, O. Novak, and J. Syka. Two-photon processor and SeNeCA: a freely available software package to process data from two-photon calcium imaging at speeds down to several milliseconds per frame. J Neurophysiol, 110, 2013. [23] I. Valmianski, A. Y. Shih, J. D. Driscoll, D. W. Matthews, Y. Freund, and D. Kleinfeld. Automatic identification of fluorescently labeled brain cells for rapid functional imaging. Journal of Neurophysiology, 2010. [24] J. T. Vogelstein, A. M. Packer, T. A. Machado, T. Sippy, B. Babadi, R. Yuste, and L. Paninski. Fast non-negative deconvolution for spike train inference from population calcium imaging. Journal of Neurophysiology, 2010. [25] M. Zeiler, D. Krishnan, G. Taylor, and R. Fergus. Deconvolutional networks. In CVPR, 2010.
From Stochastic Mixability to Fast Rates Nishant A. Mehta Research School of Computer Science Australian National University nishant.mehta@anu.edu.au Robert C. Williamson Research School of Computer Science Australian National University and NICTA bob.williamson@anu.edu.au Abstract Empirical risk minimization (ERM) is a fundamental learning rule for statistical learning problems in which the data is generated according to some unknown distribution P; it returns a hypothesis f chosen from a fixed class F with small loss ℓ. In the parametric setting, depending upon (ℓ, F, P), ERM can have slow (1/√n) or fast (1/n) rates of convergence of the excess risk as a function of the sample size n. There exist several results that give sufficient conditions for fast rates in terms of joint properties of ℓ, F, and P, such as the margin condition and the Bernstein condition. In the non-statistical prediction with expert advice setting, there is an analogous slow and fast rate phenomenon, and it is entirely characterized in terms of the mixability of the loss ℓ (there being no role there for F or P). The notion of stochastic mixability builds a bridge between these two models of learning, reducing to classical mixability in a special case. The present paper presents a direct proof of fast rates for ERM in terms of stochastic mixability of (ℓ, F, P), and in so doing provides new insight into the fast-rates phenomenon. The proof exploits an old result of Kemperman on the solution to the general moment problem. We also show a partial converse that suggests a characterization of fast rates for ERM in terms of stochastic mixability is possible. 1 Introduction Recent years have unveiled central contact points between the areas of statistical and online learning. 
These include Abernethy et al.’s [1] unified Bregman-divergence based analysis of online convex optimization and statistical learning, the online-to-batch conversion of the exponentially weighted average forecaster (a special case of the aggregating algorithm for mixable losses) which yields the progressive mixture rule as can be seen e.g. from the work of Audibert [2], and most recently Van Erven et al.’s [21] injection of the concept of mixability into the statistical learning space in the form of stochastic mixability. It is this last connection that will be our departure point for this work. Mixability is a fundamental property of a loss that characterizes when constant regret is possible in the online learning game of prediction with expert advice [23]. Stochastic mixability is a natural adaptation of mixability to the statistical learning setting; in fact, in the special case where the function class consists of all possible functions from the input space to the prediction space, stochastic mixability is equivalent to mixability [21]. Just as Vovk and coworkers (see e.g. [24, 8]) have developed a rich convex geometric understanding of mixability, stochastic mixability can be understood as a sort of effective convexity. In this work, we study the O(1/n)-fast rate phenomenon in statistical learning from the perspective of stochastic mixability. Our motivation is that stochastic mixability might characterize fast rates in statistical learning. As a first step, Theorem 5 herein establishes via a rather direct argument that stochastic mixability implies an exact oracle inequality (i.e. with leading constant 1) with a fast rate for finite function classes, and Theorem 7 extends this result to VC-type classes. 
This result can be understood as a new chapter in an evolving narrative that started with Lee et al.’s [13] seminal paper showing fast rates for agnostic learning with squared loss over convex function classes, and that was continued by Mendelson [18], who showed that fast rates are possible for p-losses (y, ŷ) ↦ |y − ŷ|^p over effectively convex function classes by passing through a Bernstein condition (defined in (12)). We also show that when stochastic mixability does not hold in a certain sense (described in Section 5), then the risk minimizer is not unique in a bad way. This is precisely the situation at the heart of the works of Mendelson [18] and Mendelson and Williamson [19], which show that having non-unique minimizers is symptomatic of bad geometry of the learning problem. In such situations, there are certain targets (i.e. output conditional distributions) close to the original target under which empirical risk minimization (ERM) learns at a slow rate, where the guilty target depends on the sample size and the target sequence approaches the original target asymptotically. Even the best known upper bounds have constants that blow up in the case of non-unique minimizers. Thus, whereas stochastic mixability implies fast rates, a sort of converse is also true, where learning is hard in a “neighborhood” of statistical learning problems for which stochastic mixability does not hold. In addition, since a stochastically mixable problem’s function class looks convex from the perspective of risk minimization, and since when stochastic mixability fails the function class looks non-convex from the same perspective (it has multiple well-separated minimizers), stochastic mixability characterizes the effective convexity of the learning problem from the perspective of risk minimization.
Much of the recent work in obtaining faster learning rates in agnostic learning has taken place in settings where a Bernstein condition holds, including results based on local Rademacher complexities [3, 10]. The Bernstein condition appears to have first been used by Bartlett and Mendelson [4] in their analysis of ERM; this condition is subtly different from the margin condition of Mammen and Tsybakov [15, 20], which has been used to obtain fast rates for classification. Lecu´e [12] pinpoints that the difference between the two conditions is that the margin condition applies to the excess loss relative to the best predictor (not necessarily in the model class) whereas the Bernstein condition applies to the excess loss relative to the best predictor in the model class. Our approach in this work is complementary to the approaches of previous works, coming from a different assumption that forms a bridge to the online learning setting. Yet this assumption is related; the Bernstein condition implies stochastic mixability under a bounded losses assumption [21]. Further understanding the connection between the Bernstein condition and stochastic mixability is an ongoing effort. Contributions. The core contribution of this work is to show a new path to the ˜O(1/n)-fast rate in statistical learning. We are not aware of previous results that show fast rates from the stochastic mixability assumption. Secondly, we establish intermediate learning rates that interpolate between the fast and slow rate under a weaker notion of stochastic mixability. Finally, we show that in a certain sense stochastic mixability characterizes the effective convexity of the statistical problem. In the next section we formally define the statistical problem, review stochastic mixability, and explain our high-level approach toward getting fast rates. 
This approach involves directly appealing to the Cramér-Chernoff method, from which nearly all known concentration inequalities arose in one way or another. In Section 3, we frame the problem of computing a particular moment of a certain excess loss random variable as a general moment problem. We sufficiently bound the optimal value of the moment, which allows for a direct application of the Cramér-Chernoff method. These results easily imply a fast rates bound for finite classes that can be extended to parametric (VC-type) classes, as shown in Section 4. We describe in Section 5 how stochastic mixability characterizes a certain notion of convexity of the statistical learning problem. In Section 6, we extend the fast rates results to classes that obey a notion we call weak stochastic mixability. Finally, Section 7 concludes this work with connections to related topics in statistical learning theory and a discussion of open problems. 2 Stochastic mixability, Cramér-Chernoff, and ERM Let (ℓ, F, P) be a statistical learning problem with ℓ : Y × ℝ → ℝ₊ a nonnegative loss, F ⊂ ℝ^X a compact function class, and P a probability measure over X × Y for input space X and output/target space Y. Let Z be a random variable defined as Z = (X, Y) ∼ P. We assume for all f ∈ F, ℓ(Y, f(X)) ≤ V almost surely (a.s.) for some constant V. A probability measure P operates on functions and loss-composed functions as: P f = E_{(X,Y)∼P} f(X) and P ℓ(·, f) = E_{(X,Y)∼P} ℓ(Y, f(X)). Similarly, an empirical measure Pn associated with an n-sample z, comprising n iid samples (x1, y1), . . . , (xn, yn), operates on functions and loss-composed functions as: Pn f = (1/n) Σ_{j=1}^n f(xj) and Pn ℓ(·, f) = (1/n) Σ_{j=1}^n ℓ(yj, f(xj)). Let f∗ be any function for which P ℓ(·, f∗) = inf_{f∈F} P ℓ(·, f). For each f ∈ F define the excess risk random variable Zf := ℓ(Y, f(X)) − ℓ(Y, f∗(X)). We frequently work with the following two subclasses. For any ε > 0, define the subclasses F⪯ε := {f ∈ F : P Zf ≤ ε} and F⪰ε := {f ∈ F : P Zf ≥ ε}.
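The notation above can be made concrete with a small numerical example. The sketch below is purely illustrative (the distribution P, the squared loss, and the three-function class are all invented): it computes the risks P ℓ(·, f), identifies f∗, and forms the subclasses F⪯ε and F⪰ε from the excess risks P Zf.

```python
def sq_loss(y, yhat):
    """Squared loss, one illustrative choice of the loss l."""
    return (y - yhat) ** 2

# Toy distribution P over (X, Y) as ((x, y), prob) pairs (made up)
support = [((0, 0), 0.4), ((0, 1), 0.1), ((1, 1), 0.4), ((1, 0), 0.1)]

# A finite class F of simple predictors (made up)
F = {"zero": lambda x: 0.0, "id": lambda x: float(x), "half": lambda x: 0.5}

def risk(f):
    """P l(., f) = E_{(X,Y)~P} l(Y, f(X))."""
    return sum(p * sq_loss(y, f(x)) for (x, y), p in support)

risks = {name: risk(f) for name, f in F.items()}
f_star = min(risks, key=risks.get)          # a risk minimizer f*

def excess(name):
    """P Z_f = P l(., f) - P l(., f*)."""
    return risks[name] - risks[f_star]

eps = 0.1
F_below = {n for n in F if excess(n) <= eps}   # the subclass with P Z_f <= eps
F_above = {n for n in F if excess(n) >= eps}   # the subclass with P Z_f >= eps
```

Here the identity predictor minimizes the risk, and the two subclasses partition the remaining functions by excess risk relative to the threshold ε.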
2.1 Stochastic mixability For η > 0, we say that (ℓ, F, P) is η-stochastically mixable if for all f ∈ F, log E exp(−ηZf) ≤ 0. (1) If η-stochastic mixability holds for some η > 0, then we say that (ℓ, F, P) is stochastically mixable. Throughout this paper it is assumed that the stochastic mixability condition holds, and we take η∗ to be the largest η such that η-stochastic mixability holds. Condition (1) has a rich history, beginning from the foundational thesis of Li [14], who studied the special case of η∗ = 1 in density estimation with log loss from the perspective of information geometry. The connections that Li showed between this condition and convexity were strengthened by Grünwald [6, 7] and Van Erven et al. [21]. 2.2 Cramér-Chernoff The high-level strategy taken here is to show that with high probability ERM will not select a fixed hypothesis function f with excess risk above a/n for some constant a > 0. For each hypothesis, this guarantee will flow from the Cramér-Chernoff method [5] by controlling the cumulant generating function (CGF) of −Zf in a particular way to yield exponential concentration. This control will be possible because the η∗-stochastic mixability condition implies that the CGF of −Zf takes the value 0 at some η ≥ η∗, a fact later exploited by our key tool, Theorem 3. Let Z be a real-valued random variable. Applying Markov’s inequality to an exponentially transformed random variable yields that, for any η ≥ 0 and t ∈ ℝ, Pr(Z ≥ t) ≤ exp(−ηt + log E exp(ηZ)); (2) the inequality is non-trivial only if t > E Z and η > 0. 2.3 Analysis of ERM We consider the ERM estimator f̂z := arg min_{f∈F} Pn ℓ(·, f). That is, given an n-sample z, ERM selects any f̂z ∈ F minimizing the empirical risk Pn ℓ(·, f). We say ERM is ε-good when f̂z ∈ F⪯ε. In order to show that ERM is ε-good it is sufficient to show that for all f ∈ F \ F⪯ε we have Pn Zf > 0.
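Both ingredients of this section are easy to probe numerically for a discrete excess-loss random variable. The sketch below is illustrative only and not from the paper: the two-point distribution of Zf and all constants are made up. It finds by bisection the largest η for which condition (1) holds for this single f, and then checks by simulation a Chernoff-style tail bound of the form Pr{Pn Zf ≤ t} ≤ exp(−a + ηt) with Λ_{−Zf}(η) = −a/n and t = 0, the kind of bound used in the ERM analysis that follows.

```python
import math
import random

random.seed(0)

# Made-up two-point excess-loss variable Zf with E[Zf] = 0.14 > 0
Z_VALS, Z_PROBS = [-0.5, 0.3], [0.2, 0.8]

def mgf_neg(eta):
    """E[exp(-eta * Zf)]; condition (1) reads mgf_neg(eta) <= 1."""
    return sum(p * math.exp(-eta * z) for z, p in zip(Z_VALS, Z_PROBS))

def largest_eta(hi=100.0, tol=1e-9):
    """Bisection for the largest eta with log E[exp(-eta Zf)] <= 0.
    Assumes mgf_neg(hi) > 1, which holds here since Zf takes a negative value."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mgf_neg(mid) <= 1.0 else (lo, mid)
    return lo

eta_star = largest_eta()

# Chernoff-style tail bound: with Lambda_{-Zf}(eta) = -a/n and t = 0,
# Pr{Pn Zf <= 0} <= exp(-a).
n, eta = 50, 1.0
a = -n * math.log(mgf_neg(eta))
bound = math.exp(-a)

trials = 20000
hits = sum(
    sum(random.choices(Z_VALS, Z_PROBS, k=n)) <= 0.0
    for _ in range(trials)
)
freq = hits / trials   # empirical frequency of the event Pn Zf <= 0
```

With these made-up numbers the empirical frequency sits well below the exponential bound, as the theory predicts.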
The goal is to show that with high probability ERM is ε-good, and we will do this by showing that with high probability uniformly for all f ∈F \ F⪯ε we have Pn Zf > t for some slack t > 0 that will come in handy later. For a real-valued random variable X, recall that the cumulant generating function of X is η 7→ ΛX(η) := log E eηX; we allow ΛX(η) to be infinite for some η > 0. Theorem 1 (Cram´er-Chernoff Control on ERM). Let a > 0 and select f such that E Zf > 0. Let t < E Zf. If there exists η > 0 such that Λ−Zf (η) ≤−a n, then Pr n Pn ℓ(·, f) ≤Pn ℓ(·, f ∗) + t o ≤exp(−a + ηt). Proof. Let Zf,1, . . . , Zf,n be iid copies of Zf, and define the sum Sf,n := Pn j=1 −Zf,j. Since (−t) > E 1 nSf,n, then from (2) we have Pr 1 n n X j=1 Zf,j ≤t = Pr 1 nSf,n ≥−t ≤exp (ηt + log E exp(ηSf,n)) = exp(ηt) E exp(−ηZf) n. 3 Making the replacement Λ−Zf (η) = log E exp(−ηZf) yields log Pr 1 nSf,n ≥−t ≤ηt + nΛ−Zf (η). By assumption, Λ−Zf (η) ≤−a n, and so Pr{Pn Zf ≤t} ≤exp(−a + ηt) as desired. This theorem will be applied by showing that for an excess loss random variable Zf taking values in [−1, 1], if for some η > 0 we have E exp(−ηZf) = 1 and if E Zf = a n for some constant a (that can and must depend on n), then Λ−Zf (η/2) ≤−cηa n where c > 0 is a universal constant. This is the nature of the next section. We then extend this result to random variables taking values in [−V, V ]. 3 Semi-infinite linear programming and the general moment problem The key subproblem now is to find, for each excess loss random variable Zf with mean a n and Λ−Zf (η) = 0 (for some η ≥η∗), a pair of constants η0 > 0 and c > 0 for which Λ−Zf (η0) ≤−ca n . Theorem 1 would then imply that ERM will prefer f ∗over this particular f with high probability for ca large enough. This subproblem is in fact an instance of the general moment problem, a problem on which Kemperman [9] has conducted a very nice geometric study. We now describe this problem. The general moment problem. 
Let P(A) be the space of probability measures over a measurable space A = (A, S). For real-valued measurable functions h and (gj)j∈[m] on a measurable space A = (A, S), the general moment problem is inf µ∈P(A) EX∼µ h(X) subject to EX∼µ gj(X) = yj, j ∈{1, . . . , m}. (3) Let the vector-valued map g : A →Rm be defined in terms of coordinate functions as (g(x))j = gj(x), and let the vector y ∈Rm be equal to (y1, . . . , ym). Let D∗⊂Rm+1 be the set D∗:= d∗= (d0, d1, . . . , dm) ∈Rm+1 : h(x) ≥d0 + m X j=1 djgj(x) for all x ∈A . (4) Theorem 3 of [9] states that if y ∈int conv g(A), the optimal value of problem (3) equals sup d0 + m X j=1 djyj : d∗= (d0, d1, . . . , dm) ∈D∗ . (5) Our instantiation. We choose A = [−1, 1], set m = 2 and define h, (gj)j∈{1,2}, and y ∈R2 as: h(x) = −e(η/2)x, g1(x) = x, g2(x) = eηx, y1 = −a n, y2 = 1, for any η > 0, a > 0, and n ∈N. This yields the following instantiation of problem (3): inf µ∈P([−1,1]) EX∼µ −e(η/2)X (6a) subject to EX∼µ X = −a n (6b) EX∼µ eηX = 1. (6c) Note that equation (5) from the general moment problem now instantiates to sup n d0 −a nd1 + d2 : d∗= (d0, d1, d2) ∈D∗o , (7) with D∗equal to the set n d∗= (d0, d1, d2) ∈R3 : −e(η/2)x ≥d0 + d1x + d2eηx for all x ∈[−1, 1] o . (8) Applying Theorem 3 of [9] requires the condition y ∈int conv g([−1, 1]). We first characterize when y ∈conv g([−1, 1]) holds and handle the int conv g([−1, 1]) version after Theorem 3. 4 Lemma 2 (Feasible Moments). The point y = −a n, 1 ∈conv g([−1, 1]) if and only if a n ≤eη + e−η −2 eη −e−η = cosh(η) −1 sinh(η) . (9) Proof. Let W denote the convex hull of g([−1, 1]). We need to see if −a n, 1 ∈W. Note that W is the convex set formed by starting with the graph of x 7→eηx on the domain [−1, 1], including the line segment connecting this curve’s endpoints (−1, e−η) to (1, eηx), and including all of the points below this line segment but above the aforementioned graph. 
That is, W is precisely the set W := (x, y) ∈R2 : eηx ≤y ≤eη + e−η 2 + eη −e−η 2 x, ∀x ∈[−1, 1] . It remains to check that 1 is sandwiched between the lower and upper bounds at x = −a n. Clearly the lower bound holds. Simple algebra shows that the upper bound is equivalent to condition (9). Note that if (9) does not hold, then the semi-infinite linear program (6) is infeasible; infeasibility in turn implies that such an excess loss random variable cannot exist. Thus, we need not worry about whether (9) holds; it holds for any excess loss random variable satisfying constraints (6b) and (6c). The following theorem is a key technical result for using stochastic mixability to control the CGF. The proof is long and can be found in Appendix A. Theorem 3 (Stochastic Mixability Concentration). Let f be an element of F with Zf taking values in [−1, 1], n ∈N, E Zf = a n for some a > 0, and Λ−Zf (η) = 0 for some η > 0. If a n < eη + e−η −2 eη −e−η , (10) then E e(η/2)(−Zf ) ≤1 −0.18(η ∧1)a n . Note that since log(1 −x) ≤−x when x < 1, we have Λ−Zf (η/2) ≤−0.18(η ∧1)a n . In order to apply Theorem 3, we need (10) to hold, but only (9) is guaranteed to hold. The corner case is if (9) holds with equality. However, observe that one can always approximate the random variable X by a perturbed version X′ which has nearly identical mean a′ ≈a and a nearly identical η′ ≈η for which EX′∼µ′ eη′X′ = 1, and yet the inequality in (9) is strict. Later, in the proof of Theorem 5, for any random variable that required perturbation to satisfy the interior condition (10), we implicitly apply the analysis to the perturbed version, show that ERM would not pick the (slightly different) function corresponding to the perturbed version, and use the closeness of the two functions to show that ERM also would not pick the original function. We now present a necessary extension for the case of losses with range [0, V ], proved in Appendix A. Lemma 4 (Bounded Losses). 
Let g1(x) = x and y2 = 1 be common settings for the following two problems. The instantiation of problem (3) with A = [−V, V ], h(x) = −e(η/2)x, g2(x) = eηx, and y1 = −a n has the same optimal value as the instantiation of problem (3) with A = [−1, 1], h(x) = −e(V η/2)x, g2(x) = e(V η)x, and y1 = −a/V n . 4 Fast rates We now show how the above results can be used to obtain an exact oracle inequality with a fast rate. We first present a result for finite classes and then present a result for VC-type classes (classes with logarithmic universal metric entropy). Theorem 5 (Finite Classes Exact Oracle Inequality). Let (ℓ, F, P) be η∗-stochastically mixable, where |F| = N, ℓis a nonnegative loss, and supf∈F ℓ Y, f(X) ≤V a.s. for a constant V . Then for all n ≥1, with probability at least 1 −δ P ℓ(·, ˆfz) ≤P ℓ(·, f ∗) + 6 max n V, 1 η∗ o log 1 δ + log N n . 5 Proof. Let γn = a n for a constant a to be fixed later. For each η > 0, let F(η) ⪰γn ⊂F⪰γn correspond to those functions in F⪰γn for which η is the largest constant such that E exp(−ηZf) = 1. Let Fhyper ⪰γn ⊂F⪰γn correspond to functions f in F⪰γn for which limη→∞E exp(−ηZf) < 1. Clearly, F⪰γn = S η∈[η∗,∞) F(η) ⪰γn ∪Fhyper ⪰γn . The excess loss random variables corresponding to elements f ∈Fhyper ⪰γn are “hyper-concentrated” in the sense that they are infinitely stochastically mixable. However, Lemma 10 in Appendix B shows that for each hyper-concentrated Zf, there exists another excess loss random variable Z′ f with mean arbitrarily close to that of Zf, with E exp(−ηZ′ f) = 1 for some arbitrarily large but finite η, and with Z′ f ≤Zf with probability 1. The last property implies that the empirical risk of Z′ f is no greater than that of Zf; hence for each hyper-concentrated Zf it is sufficient (from the perspective of ERM) to study a corresponding Z′ f. From now on, we implicitly make this replacement in F⪰γn itself, so that we now have F⪰γn = S η∈[η∗,∞) F(η) ⪰γn. Consider an arbitrary a > 0. 
For some fixed η ∈[η∗, ∞) for which |F(η) ⪰γn| > 0, consider the subclass F(η) ⪰γn. Individually for each such function, we will apply Theorem 1 as follows. From Lemma 4, we have Λ−Zf (η/2) = Λ−1 V Zf (V η/2). From Theorem 3, the latter is at most −0.18(V η ∧1)(a/V ) n = − 0.18ηa (V η ∨1)n . Hence, Theorem 1 with t = 0 and the η from the Theorem taken to be η/2 implies that the probability of the event Pn ℓ(·, f) ≤Pn ℓ(·, f ∗) is at most exp −0.18 η V η ∨1a . Applying the union bound over all of F⪰γn, we conclude that Pr {∃f ∈F⪰γn : Pn ℓ(·, f) ≤Pn ℓ(·, f ∗)} ≤N exp −η∗ 0.18a V η∗∨1 . Since ERM selects hypotheses on their empirical risk, from inversion it holds that with probability at least 1 −δ ERM will not select any hypothesis with excess risk at least 6 max{V, 1 η∗}(log 1 δ +log N) n . Before presenting the result for VC-type classes, we require some definitions. For a pseudometric space (G, d), for any ε > 0, let N(ε, G, d) be the ε-covering number of (G, d); that is, N(ε, G, d) is the minimal number of balls of radius ε needed to cover G. We will further constrain the cover (the set of centers of the balls) to be a subset of G (i.e. to be proper), thus ensuring that the stochastic mixability assumption transfers to any (proper) cover of F. Note that the “proper” requirement at most doubles the constant K below, as shown by Vidyasagar [22, Lemma 2.1]. We now state a localization-based result that allows us to extend the result for finite classes to VCtype classes. Although the localization result can be obtained by combining standard techniques,1 we could not find this particular result in the literature. Below, an ε-net Fε of a set F is a subset of F such that F is contained in the union of the balls of radius ε with centers in Fε. Theorem 6. Let F be a separable function class whose functions have range bounded in [0, V ] and for which, for a constant K ≥1, for each u ∈(0, K] the L2(P) covering numbers are bounded as N(u, F, L2(P)) ≤ K u C . 
(11) Suppose Fε is a minimal ε-net for F in the L2(P) norm, with ε = 1 n. Denote by π : F →Fε an L2(P)-metric projection from F to Fε. Then, provided that δ ≤1 2, with probability at most δ can there exist f ∈F such that Pn f < Pn(π(f)) −V n 1080C log(2Kn) + 90 s log 1 δ C log(2Kn) + log e δ ! . The proof is presented in Appendix C. We now present the fast rates result for VC-type classes. The proof (in Appendix C) uses Theorem 6 and the proof of the Theorem 5. Below, we denote the loss-composed version of a function class F as ℓ◦F := {ℓ(·, f) : f ∈F}. 1See e.g. the techniques of Massart and N´ed´elec [16] and equation (3.17) of Koltchinskii [11]. 6 Theorem 7 (VC-Type Classes Exact Oracle Inequality). Let (ℓ, F, P) be η∗-stochastically mixable with ℓ◦F separable, where, for a constant K ≥1, for each ε ∈(0, K] we have N(ℓ◦F, L2(P), ε) ≤ K ε C, and supf∈F ℓ Y, f(X) ≤V a.s. for a constant V ≥1. Then for all n ≥5 and δ ≤1 2, with probability at least 1 −δ P ℓ(·, ˆfz) ≤P ℓ(·, f ∗) + 1 n max 8 max n V, 1 η∗ o C log(Kn) + log 2 δ , 2V 1080C log(2Kn) + 90 q log 2 δ C log(2Kn) + log 2e δ + 1 n. 5 Characterizing convexity from the perspective of risk minimization In the following, when we say (ℓ, F, P) has a unique minimizer we mean that any two minimizers f ∗ 1 , f ∗ 2 of P ℓ(·, f) over F satisfy ℓ Y, f ∗ 1 (X) = ℓ Y, f ∗ 2 (X) a.s. We say the excess loss class {ℓ(·, f) −ℓ(·, f ∗) : f ∈F} satisfies a (β, B)-Bernstein condition with respect to P for some B > 0 and 0 < β ≤1 if, for all f ∈F: P ℓ(·, f) −ℓ(·, f ∗) 2 ≤B P ℓ(·, f) −ℓ(·, f ∗) β . (12) It already is known that the stochastic mixability condition guarantees that there is a unique minimizer [21]; this is a simple consequence of Jensen’s inequality. This leaves open the question: if stochastic mixability does not hold, are there necessarily non-unique minimizers? We show that in a certain sense this is indeed the case, in bad way: the set of minimizers will be a disconnected set. 
For any ε > 0, define Gε as the class Gε := {f ∗} ∪ f ∈F : ∥f −f ∗∥L1(P) ≥ε , where in case there are multiple minimizers in F we arbitrarily select one of them as f ∗. Since we assume that F is compact and Gε \ {f ∗} is equal to F minus an open set homeomorphic to the unit L1(P) ball, Gε \ {f ∗} is also compact. Theorem 8 (Non-Unique Minimizers). Suppose there exists some ε > 0 such that Gε is not stochastically mixable. Then there are minimizers f ∗ 1 , f ∗ 2 ∈F of P ℓ(·, f) over F such that it is not the case that ℓ Y, f ∗ 1 (X) = ℓ Y, f ∗ 2 (X) a.s. Proof. Select ε > 0 as in the theorem and some fixed η > 0. Since Gε is not η-stochastically mixable, there exists fη ∈Gε such that Λ−Zfη (η) > 0. Note that there exists η′ ∈(0, η) with Λ−Zfη (η′) = 0; if not, limη↓0 Λ−Zfη (η)−Λ−Zfη (0) η > 0 ⇒Λ′ −Zfη (0) > 0, so Λ′ −Zfη (0) = E(−Zfη) implies that E Zfη < 0, a contradiction! From Lemma 2, E Zfη ≤cosh(η′)−1 sinh(η′) ; for η′ ≥0 the RHS has upper bound η′ 2 since the derivative of η′ 2 −cosh(η′)−1 sinh(η′) is the nonnegative function 1 2 tanh2(η′/2) and η′ 2 −cosh(η′)−1 sinh(η′) |η′=0 = 0. Thus, E Zfη →0 as η →0. As Gε \{f ∗} is compact, we can take a positive decreasing sequence (ηj)j approaching 0, corresponding to a sequence (fηj)j ⊂Gε\{f ∗} with limit point g∗∈Gε \{f ∗} for which E Zg∗= 0, and so there is a risk minimizer in Gε \{f ∗}. The implications of having non-unique risk minimizers. In the case of non-unique risk minimizers, Mendelson [17] showed that for p-losses (y, ˆy) 7→|y −ˆy|p with p ∈[2, ∞) there is an n-indexed sequence of probability measures (P(n))n approaching the true probability measure as n →∞such that, for each n, ERM learns at a slow rate under sample size n when the true distribution is P(n). This behavior is a consequence of the statistical learning problem’s poor geometry: there are multiple minimizers and the set of minimizers is not even connected. 
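The step in this proof that invokes Lemma 2, namely that any X with E e^{ηX} = 1 must satisfy −E X ≤ (cosh η − 1)/sinh η, can be checked numerically. The sketch below is illustrative and not the authors' code (the choice of η and the grid are arbitrary): it enumerates two-point distributions on [−1, 1] calibrated so that E[e^{ηX}] = 1, verifies condition (9) for each, and confirms that the bound is attained by the extreme distribution supported on {−1, 1}.

```python
import math

def lemma2_rhs(eta):
    """(cosh(eta) - 1) / sinh(eta), the right-hand side of condition (9)."""
    return (math.cosh(eta) - 1.0) / math.sinh(eta)

def two_point_mean(eta, x_pos, x_neg):
    """Mean of the distribution on {x_pos, x_neg} (with x_pos > 0 > x_neg)
    whose probabilities are chosen so that E[exp(eta * X)] = 1."""
    p = (1.0 - math.exp(eta * x_neg)) / (math.exp(eta * x_pos) - math.exp(eta * x_neg))
    assert 0.0 < p < 1.0
    return p * x_pos + (1.0 - p) * x_neg

eta = 1.5
violations = 0
for i in range(1, 51):
    for j in range(1, 51):
        m = two_point_mean(eta, i / 50.0, -j / 50.0)
        # condition (9): a/n = -E[X] cannot exceed (cosh(eta) - 1)/sinh(eta)
        if -m > lemma2_rhs(eta) + 1e-12:
            violations += 1

# the bound is tight for the distribution supported on {-1, 1}
extreme_gap = abs(-two_point_mean(eta, 1.0, -1.0) - lemma2_rhs(eta))
```

No violations occur on the grid, and the extreme two-point distribution meets the bound exactly (up to floating-point error).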
Furthermore, in this case, the best known fast rate upper bounds (see [18] and [19]) have a multiplicative constant that approaches ∞as the target probability measure approaches a probability measure for which there are non-unique minimizers. The reason for the poor upper bounds in this case is that the constant B in the Bernstein condition explodes, and the upper bounds rely upon the Bernstein condition. 6 Weak stochastic mixability For some κ ∈[0, 1], we say (ℓ, F, P) is (κ, η0)-weakly stochastically mixable if, for every ε > 0, for all f ∈{f ∗} ∪F⪰ε, the inequality log E exp(−ηεZf) ≤0 holds with ηε := η0ε1−κ. This concept was introduced by Van Erven et al. [21] without a name. 7 Suppose that some fixed function has excess risk a = ε. Then, roughly, with high probability ERM does not make a mistake provided that aηa = 1 n, i.e. when ε · η0ε1−κ = 1 n and hence when ε = (η0n)−1/(2−κ). Modifying the proof of the finite classes result (Theorem 5) to consider all functions in the subclass F⪰γn for γn = (η0n)−1/(2−κ) yields the following corollary of Theorem 5. Corollary 9. Let (ℓ, F, P) be (κ, η0)-weakly stochastically mixable for some κ ∈[0, 1], where |F| = N, ℓis a nonnegative loss, and supf∈F ℓ Y, f(X) ≤V a.s. for a constant V . Then for any n ≥ 1 η0 V (1−κ)/(2−κ), with probability at least 1 −δ P ℓ(·, ˆfz) ≤P ℓ(·, f ∗) + 6 log 1 δ + log N (η0n)1/(2−κ) . It is simple to show a similar result for VC-type classes; the ε-net can still be taken at the resolution 1 n, but we need only apply the analysis to the subclass of F with excess risk at least (η0n)−1/(2−κ). 7 Discussion We have shown that stochastic mixability implies fast rates for VC-type classes, using a direct argument based on the Cram´er-Chernoff method and sufficient control of the optimal value of a certain instance of the general moment problem. The approach is amenable to localization in that the analysis separately controls the probability of large deviations for individual elements of F. 
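The rate (η₀n)^{−1/(2−κ)} appearing in Corollary 9 interpolates between the slow and fast regimes. A quick sketch (the values of n, κ, and η₀ are arbitrary) making the two endpoints explicit:

```python
def weak_mixability_rate(n, kappa, eta0=1.0):
    """Excess-risk scale (eta0 * n)^(-1/(2 - kappa)) from Corollary 9."""
    return (eta0 * n) ** (-1.0 / (2.0 - kappa))

n = 10_000
slow = weak_mixability_rate(n, kappa=0.0)  # kappa = 0 recovers the 1/sqrt(n) slow rate
mid = weak_mixability_rate(n, kappa=0.5)   # intermediate rate n^(-2/3)
fast = weak_mixability_rate(n, kappa=1.0)  # kappa = 1 recovers the 1/n fast rate
```

So κ = 1 (plain stochastic mixability) gives the fast rate, κ = 0 gives the familiar slow rate, and intermediate κ interpolates monotonically between them.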
An important open problem is to extend the results presented here for VC-type classes to results for nonparametric classes with polynomial metric entropy, and moreover, to achieve rates similar to those obtained for these classes under the Bernstein condition. There are still some unanswered questions with regards to the connection between the Bernstein condition and stochastic mixability. Van Erven et al. [21] showed that for bounded losses the Bernstein condition implies stochastic mixability. Therefore, when starting from a Bernstein condition, Theorem 5 offers a different path to fast rates. An open problem is to settle the question of whether the Bernstein condition and stochastic mixability are equivalent. Previous results [21] suggest that the stochastic mixability does imply a Bernstein condition, but the proof was non-constructive, and it relied upon a bounded losses assumption. It is well known (and easy to see) that both stochastic mixability and the Bernstein condition hold only if there is a unique minimizer. Theorem 8 shows in a certain sense that if stochastic mixability does not hold, then there cannot be a unique minimizer. Is the same true when the Bernstein condition fails to hold? Regardless of whether stochastic mixability is equivalent to the Bernstein condition, the direct argument presented here and the connection to classical mixability, which does characterize constant regret in the simpler non-stochastic setting, motivates further study of stochastic mixability. Finally, it would be of great interest to discard the bounded losses assumption. Ignoring the dependence of the metric entropy on the maximum possible loss, the upper bound on the loss V enters the final bound through the difficulty of controlling the minimum value of uη(−1) when η is large (see the proof of Theorem 3). 
From extensive experiments with a grid-approximation linear program, we have observed that the worst (CGF-wise) random variables for fixed negative mean and fixed optimal stochastic mixability constant are those which place very little probability mass at −V and most of the probability mass at a small positive number that scales with the mean. These random variables correspond to functions that with low probability beat f ∗by a large (loss) margin but with high probability have slightly higher loss than f ∗. It would be useful to understand if this exotic behavior is a real concern and, if not, find a simple, mild condition on the moments that rules it out. Acknowledgments RCW thanks Tim van Erven for the initial discussions around the Cram´er-Chernoff method during his visit to Canberra in 2013 and for his gracious permission to proceed with the present paper without him as an author, and both authors thank him for the further enormously helpful spotting of a serious error in our original proof for fast rates for VC-type classes. This work was supported by the Australian Research Council (NAM and RCW) and NICTA (RCW). NICTA is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence program. 8 References [1] Jacob Abernethy, Alekh Agarwal, Peter L. Bartlett, and Alexander Rakhlin. A stochastic view of optimal regret through minimax duality. In Proceedings of the 22nd Annual Conference on Learning Theory (COLT 2009), 2009. [2] Jean-Yves Audibert. Fast learning rates in statistical inference through aggregation. The Annals of Statistics, 37(4):1591–1646, 2009. [3] Peter L. Bartlett, Olivier Bousquet, and Shahar Mendelson. Local Rademacher complexities. The Annals of Statistics, 33(4):1497–1537, 2005. [4] Peter L. Bartlett and Shahar Mendelson. Empirical minimization. Probability Theory and Related Fields, 135(3):311–334, 2006. 
[5] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities: A nonasymptotic theory of independence. Oxford University Press, 2013. [6] Peter Grünwald. Safe learning: bridging the gap between Bayes, MDL and statistical learning theory via empirical convexity. In Proceedings of the 24th International Conference on Learning Theory (COLT 2011), pages 397–419, 2011. [7] Peter Grünwald. The safe Bayesian. In Proceedings of the 23rd International Conference on Algorithmic Learning Theory (ALT 2012), pages 169–183. Springer, 2012. [8] Yuri Kalnishkan and Michael V. Vyugin. The weak aggregating algorithm and weak mixability. In Proceedings of the 18th Annual Conference on Learning Theory (COLT 2005), pages 188–203. Springer, 2005. [9] Johannes H.B. Kemperman. The general moment problem, a geometric approach. The Annals of Mathematical Statistics, 39(1):93–122, 1968. [10] Vladimir Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics, 34(6):2593–2656, 2006. [11] Vladimir Koltchinskii. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems: École d'Été de Probabilités de Saint-Flour XXXVIII-2008, volume 2033. Springer, 2011. [12] Guillaume Lecué. Interplay between concentration, complexity and geometry in learning theory with applications to high dimensional data analysis. Habilitation à diriger des recherches, Université Paris-Est, 2011. [13] Wee Sun Lee, Peter L. Bartlett, and Robert C. Williamson. The importance of convexity in learning with squared loss. IEEE Transactions on Information Theory, 44(5):1974–1980, 1998. [14] Jonathan Qiang Li. Estimation of mixture models. PhD thesis, Yale University, 1999. [15] Enno Mammen and Alexandre B. Tsybakov. Smooth discrimination analysis. The Annals of Statistics, 27(6):1808–1829, 1999. [16] Pascal Massart and Élodie Nédélec. Risk bounds for statistical learning. The Annals of Statistics, 34(5):2326–2366, 2006.
[17] Shahar Mendelson. Lower bounds for the empirical minimization algorithm. IEEE Transactions on Information Theory, 54(8):3797–3803, 2008. [18] Shahar Mendelson. Obtaining fast error rates in nonconvex situations. Journal of Complexity, 24(3):380–397, 2008. [19] Shahar Mendelson and Robert C. Williamson. Agnostic learning nonconvex function classes. In Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002), pages 1–13. Springer, 2002. [20] Alexander B. Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135–166, 2004. [21] Tim Van Erven, Peter D. Grünwald, Mark D. Reid, and Robert C. Williamson. Mixability in statistical learning. In Advances in Neural Information Processing Systems 25 (NIPS 2012), pages 1700–1708, 2012. [22] Mathukumalli Vidyasagar. Learning and Generalization with Applications to Neural Networks. Springer, 2002. [23] Volodya Vovk. A game of prediction with expert advice. Journal of Computer and System Sciences, 56(2):153–173, 1998. [24] Volodya Vovk. Competitive on-line statistics. International Statistical Review, 69(2):213–248, 2001.
|
2014
|
195
|
5,286
|
Algorithm selection by rational metareasoning as a model of human strategy selection Falk Lieder Helen Wills Neuroscience Institute, UC Berkeley falk.lieder@berkeley.edu Dillon Plunkett Department of Psychology, UC Berkeley dillonplunkett@berkeley.edu Jessica B. Hamrick Department of Psychology, UC Berkeley jhamrick@berkeley.edu Stuart J. Russell EECS Department, UC Berkeley russell@cs.berkeley.edu Nicholas J. Hay EECS Department, UC Berkeley nickjhay@berkeley.edu Thomas L. Griffiths Department of Psychology, UC Berkeley tom griffiths@berkeley.edu Abstract Selecting the right algorithm is an important problem in computer science, because the algorithm often has to exploit the structure of the input to be efficient. The human mind faces the same challenge. Therefore, solutions to the algorithm selection problem can inspire models of human strategy selection and vice versa. Here, we view the algorithm selection problem as a special case of metareasoning and derive a solution that outperforms existing methods in sorting algorithm selection. We apply our theory to model how people choose between cognitive strategies and test its prediction in a behavioral experiment. We find that people quickly learn to adaptively choose between cognitive strategies. People’s choices in our experiment are consistent with our model but inconsistent with previous theories of human strategy selection. Rational metareasoning appears to be a promising framework for reverse-engineering how people choose among cognitive strategies and translating the results into better solutions to the algorithm selection problem. 1 Introduction To solve complex problems in real-time, intelligent agents have to make efficient use of their finite computational resources. Although there are general purpose algorithms, particular problems can often be solved more efficiently by specialized algorithms. 
The human mind can take advantage of this fact: People appear to have a toolbox of cognitive strategies [1] from which they choose adaptively [2, 3]. How these choices are made is an important, open question in cognitive science [4]. At an abstract level, choosing a cognitive strategy is equivalent to the algorithm selection problem in computer science [5]: given a set of possible inputs I, a set of possible algorithms A, and a performance metric, find the selection mapping from I to A that maximizes the expected performance. Here, we draw on a theoretical framework from artificial intelligence (rational metareasoning [6]) and Bayesian machine learning to develop a mathematical theory of how people should choose between cognitive strategies and test its predictions in a behavioral experiment. In the first section, we apply rational metareasoning to the algorithm selection problem and derive how the optimal algorithm selection mapping can be efficiently approximated by model-based learning when a small number of features is predictive of the algorithm's runtime and accuracy. In Section 2, we evaluate the performance of our solution against state-of-the-art methods for sorting algorithm selection. In Sections 3 and 4, we apply our theory to cognitive modeling and report a behavioral experiment demonstrating that people quickly learn to adaptively choose between cognitive strategies in a manner predicted by our model but inconsistent with previous theories. We conclude with future directions at the interface of psychology and artificial intelligence.

2 Algorithm selection by rational metareasoning

Metareasoning is the problem of deciding which computations to perform given a problem and a computational architecture [6]. Algorithm selection is a special case of metareasoning in which the choice is limited to a few sequences of computations that generate complete results.
According to rational metareasoning [6], the optimal solution maximizes the value of computation (VOC). The VOC is the expected utility of acting after having performed the computation (and additional computations) minus the expected utility of acting immediately. In the general case, determining the VOC requires solving a Markov decision problem [7]. Yet, in the special case of algorithm selection, the hard problem of planning which computations to perform, how often, and in which order reduces to the simpler one-shot choice between a small number of algorithms. We can therefore use the following approximation to the VOC from [6] as the performance metric to be maximized:

VOC(a; i) ≈ E_{P(S|a,i)}[S] − E_{P(T|a,i)}[TC(T)]   (1)

m(i) = argmax_{a ∈ A} VOC(a; i),   (2)

where a ∈ A is one of the available algorithms, i ∈ I is the input, S and T are the score and runtime of algorithm a on input i, and TC(T) is the opportunity cost of running the algorithm for T units of time. The score S can be binary (correct vs. incorrect output) or numeric (e.g., error penalty). The selection mapping m defined in Equation 2 depends on the conditional distributions of score and runtime (P(S|a, i) and P(T|a, i)). These distributions are generally unknown, but they can be learned. Learning an approximation to the VOC from experience, i.e. meta-level learning [6], is a hard technical challenge [8], but it is tractable in the special case of algorithm selection. Learning the conditional distributions of score and runtime separately for every possible input is generally intractable. However, in many domains the inputs are structured and can be approximately represented by a small number of features. Concretely, the effect of the input on score and runtime is mediated by its features f = (f_1(i), ..., f_N(i)):

P(S|a, i) = P(S|f, a) = P(S|f_1(i), ..., f_N(i), a)   (3)

P(T|a, i) = P(T|f, a) = P(T|f_1(i), ..., f_N(i), a).
(4)

If the features are observable and the distributions P(S|f_1(i), ..., f_N(i), a) and P(T|f_1(i), ..., f_N(i), a) have been learned, then one can very efficiently compute an estimate of the expected value of applying the algorithm to a novel input. To learn the distributions P(S|f_1(i), ..., f_N(i), a) and P(T|f_1(i), ..., f_N(i), a) from examples, we assume simple parametric forms for these distributions and estimate their parameters from the scores and runtimes of the algorithms on previous problem instances. As a first approximation, we assume that the runtime of an algorithm on problems with features f is normally distributed with mean µ(f; a) and standard deviation σ(f; a). We further assumed that the mean is a 2nd-order polynomial in the extended features f̃ = (f_1(i), ..., f_N(i), log(f_1(i)), ..., log(f_N(i))) and that the variance is independent of the mean:

P(T|f; a, α) = N(µ_T(f; a, α), σ_T(a))   (5)

µ_T(f; a, α) = Σ_{k_1=0}^{2} ··· Σ_{k_N=0}^{2 − Σ_{j=1}^{N−1} k_j} α_{k_1,...,k_N; a} · f̃_1^{k_1} · ... · f̃_N^{k_N}   (6)

P(σ_T(a)) = Gamma(σ_T^{−1}; 0.01, 0.01),   (7)

where α are the regression coefficients. Similarly, we model the probability that the algorithm returns the correct answer by a logistic function of a second-order polynomial of the extended features:

P(S = 1|a, f, β) = 1 / (1 + exp(Σ_{k_1=0}^{2} ··· Σ_{k_N=0}^{2 − Σ_{j=1}^{N−1} k_j} β_{k_1,...,k_N; a} · f̃_1^{k_1} · ... · f̃_N^{k_N})),   (8)

with regression coefficients β. The conditional distribution of a continuous score can be modeled analogously to Equation 5, and we use γ to denote its regression coefficients. If the time cost is a linear function of the algorithm's runtime, i.e. TC(t) = c · t for some constant c, then the value of applying the algorithm depends only on the expectations of the runtime and score distributions. For linear scores

E_{P(S,T|a,i)}[S − TC(T)] = µ_S(f(i); a, γ) − c · µ_T(f(i); a, α),   (9)

and for binary scores

E_{P(S,T|a,i)}[S − TC(T)] = E_{P(β|s,a,i)}[P(S = 1; i, β)] − c · µ_T(f(i); a, α).
(10)

We approximated E_{P(β|s,a,i)}[P(S = 1; i, β)] according to Equation 10 in [9]. Thus, the algorithm selection mapping m can be learned by estimating the parameters α and β or γ. Our method estimates α by Bayesian linear regression. When the score is binary, β is estimated by variational Bayesian logistic regression [9], and when the score is continuous, γ is estimated by Bayesian linear regression. For Bayesian linear regression, we use conjugate Gaussian priors with mean zero and unit variance, so that the posterior distributions can be computed very efficiently by analytic update equations. Given the posterior distributions on the parameters, we compute the expected VOC by marginalization. When the score is continuous, µ_S(f(i); a, γ) is linear in γ and µ_T(f(i); a, α) is linear in α. Thus integrating out α and γ with respect to the posterior yields

VOC(a; i) = µ_S(f(i); a, µ_{γ|i,s}) − c · µ_T(f(i); a, µ_{α|i,t}),   (11)

where µ_α and µ_γ are the posterior means of α and γ, respectively. This implies the following simple solution to the algorithm selection problem:

a(i; c) = argmax_{a ∈ A} ( µ_S(f(i); a, µ_{γ|i_train,s_train}) − c · µ_T(f(i); a, µ_{α|i_train,t_train}) ).   (12)

For binary scores, the runtime component is predicted in exactly the same way, and a variational approximation to the posterior predictive density can be used for the score component [9]. To discover the best model of an algorithm's runtime and score, our method performs feature selection by Bayesian model choice [10]. We consider all possible combinations of the regressors defined above. To efficiently find the optimal set of features in this exponentially large model space, we exploit the fact that all models are nested within the full model. This allows us to efficiently compute Bayes factors using Savage-Dickey ratios [11].

3 Performance evaluation against methods for selecting sorting algorithms

Our goal was to evaluate rational metareasoning not only against existing methods but also against human performance.
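The selection rule in Equation (12) is straightforward once the posterior-mean regression coefficients are available. The sketch below is our own illustration, not the authors' code: the feature map mirrors the degree-at-most-2 polynomial in the extended features of Equation (6), and the coefficient vectors and time cost c are placeholder assumptions rather than fitted values.

```python
# Sketch of Equations (11)-(12): score each algorithm by predicted expected
# score minus time-cost-weighted predicted runtime, then pick the argmax.
import math

def extended_features(f):
    """Extended feature vector f~ = (f_1, ..., f_N, log f_1, ..., log f_N)."""
    return list(f) + [math.log(x) for x in f]

def poly2(ft):
    """All monomials of total degree <= 2 in the extended features (cf. Eq. 6)."""
    terms = [1.0] + list(ft)
    for i in range(len(ft)):
        for j in range(i, len(ft)):
            terms.append(ft[i] * ft[j])
    return terms

def voc(alpha, gamma, f, c):
    """VOC(a; i) = mu_S(f; a, gamma) - c * mu_T(f; a, alpha), cf. Eq. (11)."""
    x = poly2(extended_features(f))
    mu_s = sum(g * xi for g, xi in zip(gamma, x))
    mu_t = sum(a * xi for a, xi in zip(alpha, x))
    return mu_s - c * mu_t

def select_algorithm(models, f, c=0.01):
    """models: {name: (alpha, gamma)}; returns the VOC-maximising name."""
    return max(models, key=lambda a: voc(models[a][0], models[a][1], f, c))
```

With two features (length and presortedness) the extended vector has 4 entries and `poly2` produces 15 regressors; any coefficient vector of that length defines a model.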
To facilitate the comparison with how people choose between cognitive strategies, we chose to evaluate our method in the domain of sorting. Algorithm selection is relevant to sorting, because there are many sorting algorithms with very different characteristics. In sorting, the input i is the sequence to be sorted. Conventional sorting algorithms are guaranteed to return the elements in correct order. Thus, the critical difference between them is in their runtimes, and runtime depends primarily on the number of elements to be sorted and their presortedness. The number of elements determines the relative importance of the coefficients of low-order (e.g., constant and linear) versus high-order terms (e.g., n^2 or n · log(n)) whose weights differ between algorithms. Presortedness is important because it determines the relative performance of algorithms that exploit pre-existing order, e.g., insertion sort, versus algorithms that do not, e.g., quicksort. According to recent reviews [12, 13], there are two key methods for sorting algorithm selection: Guo's decision-tree method [14] and Lagoudakis et al.'s recursive algorithm selection method [15]. We thus evaluated the performance of rational metareasoning against these two approaches.

3.1 Evaluation against Guo's method

Guo's method learns a decision-tree, i.e. a sequence of logical rules that are applied to the list's features to determine the sorting algorithm [14].

test set               | performance | 95% CI         | Guo's performance | p-value
-----------------------|-------------|----------------|-------------------|-----------
Dsort5                 | 99.78%      | [99.7%, 99.9%] | 98.5%             | p < 10^-15
nearly sorted lists    | 99.99%      | [99.3%, 100%]  | 99.4%             | p < 10^-15
inversely sorted lists | 83.37%      | [82.7%, 84.1%] | 77.0%             | p < 10^-15
random permutations    | 99.99%      | [99.2%, 100%]  | 85.3%             | p < 10^-15

Table 1: Evaluation of rational metareasoning against Guo's method. Performance was measured by the percentage of problems for which the method chose the fastest algorithm.

Guo's method and our method represent inputs by
the same pair of features: f_1 = |i|, the length of the list to be sorted, and f_2, a measure of presortedness. Concretely, f_2 estimates the number of inversions from the number of runs in the sequence, i.e. f_2 = (f_1/2) · RUNS(i), where RUNS(i) = |{m : i_m > i_{m+1}}|. This measure of presortedness can be computed much more efficiently than the number of inversions. Our method learns the conditional distributions of runtime and score given these two features, and uses them to approximate the conditional distributions given the input (Equations 3–4). We verified that our method can learn how runtime depends on sequence length and presortedness (data not shown). Next, we subjected our method to Guo's performance evaluation [14]. We thus evaluated rational metareasoning on the problem of choosing between insertion sort, shell sort, heapsort, merge sort, and quicksort. We matched our training sets to Guo's Dsort4 in the number of lists (i.e. 1875) and the distributions of length and presortedness. We provided the run-time of all algorithms rather than the index of the fastest algorithm. Otherwise, the training sets were equivalent. For each of Guo's four test sets, we trained and evaluated rational metareasoning on 100 randomly generated pairs of training and test sets. The first test set mimicked Guo's Dsort5 problem set [14]. It comprised 1000 permutations of the numbers 1 to 1000. Of the 1000 sequences, 950 were random permutations and 50 were nearly-sorted. The nearly-sorted lists were created by applying 10 random pair-wise permutations to the numbers 1–1000. The sequences contained between 1 and 520 runs (mean = 260, SD = 110). The second test set comprised 1000 nearly-sorted lists of length 1000. Each list was created by applying 10 different random pair-wise permutations to the numbers 1 to 1000. The third test set comprised 100 lists in reverse order. The fourth test set comprised 1000 random permutations.
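Both features are cheap to compute; the runs-based estimate costs a single O(n) pass, unlike counting all inversions directly. A minimal sketch of ours, under the reading f_2 = (f_1/2) · RUNS(i) of the formula above (for a uniformly random permutation, expected inversions ≈ length/2 times the expected number of adjacent descents, which motivates this estimate):

```python
# Features used by the selection method: list length f1 and the runs-based
# presortedness estimate f2 = (f1 / 2) * RUNS(i), with
# RUNS(i) = |{m : i_m > i_{m+1}}| (number of adjacent descents).
def runs(seq):
    return sum(1 for a, b in zip(seq, seq[1:]) if a > b)

def features(seq):
    f1 = len(seq)
    f2 = (f1 / 2) * runs(seq)
    return f1, f2
```

For a fully reversed list of length 4, `runs` is 3 and the estimate `f2 = 6` happens to equal the true inversion count.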
Table 1 compares how frequently rational metareasoning chose the best algorithm on each test set to the results reported by Guo [14]. We estimated our method's expected performance θ by its average performance and 95% credible intervals. Credible intervals (CI) were computed by Bayesian inference with a uniform prior, and they comprise the values with highest posterior density whose total probability is 0.95. In brief, rational metareasoning significantly outperformed Guo's decision-tree method on all four test sets. The performance gain was highest on random permutations: rational metareasoning chose the best algorithm 99.99% rather than only 85.3% of the time.

3.2 Evaluation against Lagoudakis et al.'s method

Depending on a list's length, Lagoudakis et al.'s method chooses either insertion sort, merge sort, or quicksort [15]. If merge sort or quicksort is chosen, the same decision rule is applied to each of the two sublists it creates. The selection mapping from lengths to algorithms is determined by minimizing the expected runtime [15]. We evaluated rational metareasoning against Lagoudakis et al.'s recursive method on 21 versions of Guo's Dsort5 test set [14] with 0%, 5%, ..., 100% nearly-sorted sequences. To accommodate differences in implementation and architecture, we recomputed Lagoudakis et al.'s solution for the runtimes measured on our system. Rational metareasoning chose between the five algorithms used by Guo and was trained on Guo's Dsort4 [14]. We compare the performance of the two methods in terms of their runtime, because none of the numerous choices of recursive algorithm selection corresponds to our method's algorithm choice. On average, our implementation of Lagoudakis et al.'s method took 102.5 ± 0.83 seconds to sort the 21 test sets, whereas rational metareasoning finished in only 27.96 ± 0.02 seconds. Rational metareasoning was thus significantly faster (p < 10^-15).
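The credible intervals in Table 1 come from Bayesian inference with a uniform prior on the method's success probability. As a sketch of ours (not the authors' code): with a Beta(1, 1) prior, the posterior after k successes in n trials is Beta(1 + k, 1 + n − k), and an interval can be read off Monte Carlo samples using only the standard library. We compute an equal-tailed interval, which for these sample sizes is very close to the highest-posterior-density interval the paper reports.

```python
# Equal-tailed 95% credible interval for a binomial proportion under a
# uniform Beta(1, 1) prior: posterior is Beta(1 + k, 1 + n - k).
import random

def credible_interval(k, n, level=0.95, draws=100_000, seed=0):
    rng = random.Random(seed)
    samples = sorted(rng.betavariate(1 + k, 1 + n - k) for _ in range(draws))
    tail = (1 - level) / 2
    return samples[int(tail * draws)], samples[int((1 - tail) * draws) - 1]
```

For example, 950 optimal choices out of 1000 test lists gives an interval of roughly [0.935, 0.962].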
Next, we restricted the sorting algorithms available to rational metareasoning to those used by Lagoudakis et al.'s method. The runtime increased to 47.90 ± 0.02 seconds, but rational metareasoning remained significantly faster than Lagoudakis et al.'s method (p < 10^-15). These comparisons highlight two advantages of our method: i) it can exploit presortedness, and ii) it can be used with arbitrarily many algorithms of any kind.

3.3 Discussion

Rational metareasoning outperformed two state-of-the-art methods for sorting algorithm selection. Our results in the domain of sorting should be interpreted as a lower bound on the performance gain that rational metareasoning can achieve on harder problems such as combinatorial optimization, planning, and search, where the runtimes of different algorithms are more variable [12]. Future research might explore the application of our theory to these harder problems, take into account heavy-tailed runtime distributions, use better representations, and incorporate active learning. Our results show that rational metareasoning is not just theoretically sound, but also competitive. We can therefore use it as a normative model of human strategy selection learning.

4 Rational metareasoning as a model of human strategy selection

Most previous theories of how humans learn when to use which cognitive strategy assume basic model-free reinforcement learning [16–18]. The REinforcement Learning among Cognitive Strategies model (RELACS [17]) and the Strategy Selection Learning model (SSL [18]) each postulate that people learn just one number for each cognitive strategy: the expected reward of applying it to an unknown problem and the sum of past rewards, respectively. These theories therefore predict that people cannot learn to instantly adapt their strategy to the characteristics of a new problem.
By contrast, the Strategy Choice And Discovery Simulation (SCADS [16]) postulates that people separately learn about a strategy’s performance on particular types of problems and its overall performance and integrate the resulting predictions by multiplication. Our theory makes critically different assumptions about the mental representation of problems and each strategy’s performance than the three previous psychological theories. First, rational metareasoning assumes that problems are represented by multiple features that can be continuous or binary. Second, rational metareasoning postulates that people maintain separate representations of a strategy’s execution time and the quality of its solution. Third, rational metareasoning can discover non-additive interactions between features. Furthermore, rational metareasoning postulates that learning, prediction, and strategy choice are more rational than previously modeled. Since our model formalizes substantially different assumptions about mental representation and information processing, determining which theory best explains human behavior will teach us more about how the human brain represents and solves strategy selection problems. To understand when and how the predictions of our theory differ from the predictions of the three existing psychological theories, we performed computer simulations of how people would choose between sorting strategies. In order to apply the psychological theories to the selection among sorting strategies, we had to define the reward (r). We considered three notions of reward: i) correctness (r ∈{−0.1, +0.1}; these numbers are based on the SCADS model [16]), ii) correctness minus time cost (r −c · t, where t is the execution time and c is a constant), and iii) reward rate (r/t). We evaluated all nine combinations of the three theories with the three notions of reward. 
We provided the SCADS model with reasonable problem types: short lists (length ≤ 16), long lists (length ≥ 32), nearly-sorted lists (less than 10% inversions), and random lists (more than 25% inversions). We evaluated the performance of these nine models against rational metareasoning in the selection between seven sorting algorithms: insertion sort, selection sort, bubble sort, shell sort, heapsort, merge sort, and quicksort. To do so, we trained each model on 1000 randomly generated lists, fixed the learned parameters, and evaluated how many lists each model could sort per second. Training and test lists were generated by sampling. Sequence lengths were sampled from a Uniform({2, ..., u}) distribution where u was 10, 100, 1000, or 10000 with equal probability. The fraction of inversions between subsequent numbers was drawn from a Beta(2, 1) distribution. We performed 100 train-and-test episodes. Sorting time was measured by selection time plus execution time. We estimated the expected sorting speed for each model by averaging. We found that while rational metareasoning achieved 88.1 ± 0.7% of the highest possible sorting speed, none of the nine alternative models achieved more than 30% of the maximal sorting speed. Thus, the time invested in metareasoning was more than offset by the time saved with the chosen strategy.

5 How do people choose cognitive strategies?

Given that rational metareasoning outperformed the nine psychological models in strategy selection, we asked whether the mind is more adaptive than those theories assume. To answer this question, we designed an experiment for which rational metareasoning predicts distinctly different choices.

5.1 Pilot studies and simulations

To design an experiment that could distinguish between our competing hypotheses, we ran two pilot studies measuring the execution time characteristics of cocktail sort (CS) and merge sort (MS), respectively.
For each pilot study we recruited 100 participants on Amazon Mechanical Turk. In the first pilot study, the interface shown in Figure 1(a) required participants to follow the step-by-step instructions of the cocktail sort algorithm. In the second pilot study, participants had to execute merge sort with the computer interface shown in Figure 1(b). We measured their sorting times for lists of varying length and presortedness. Then, based on this data, we estimated how long comparisons and moves take using each strategy. This led to the following sorting time models:

T_CS = t̂_CS + ε_CS,   t̂_CS = 19.59 + 0.19 · n_comparisons + 0.31 · n_moves,   ε_CS ~ N(0, 0.21 · t̂_CS^2)   (13)

T_MS = t̂_MS + ε_MS,   t̂_MS = 13.98 + 1.10 · n_comparisons + 0.52 · n_moves,   ε_MS ~ N(0, 0.15 · t̂_MS^2)   (14)

We then used these sorting time models to simulate 10^4 candidate strategy selection experiments according to each of the 10 models. We found several potential experiments for which rational metareasoning makes qualitatively different predictions than all of the alternative psychological theories, and we chose the one that achieved the best compromise between discriminability and duration. According to the two runtime models (Equations 13–14) and how many comparisons and moves each algorithm would perform, people should choose merge sort for long and nearly inversely sorted sequences and cocktail sort for sequences that are either nearly-sorted or short. For the chosen experimental design, the three existing psychological theories predicted that people would fail to learn this contingency; see Figure 2. By contrast, rational metareasoning predicted that adaptive strategy selection would be evident from the choices of more than 70% of our participants. Therefore, the chosen experimental design was well suited to discriminate rational metareasoning from previous theories. The next section describes the strategy choice experiment in detail.
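As a quick illustration (our own sketch, not the authors' simulation code), the fitted mean-time models in Equations (13)–(14) directly predict which strategy should be faster for a given list, once the comparison and move counts each algorithm would perform on that list are known:

```python
# Mean sorting-time models from Equations (13)-(14): an intercept plus
# per-comparison and per-move time costs, estimated from the pilot studies.
def t_cocktail(n_comparisons, n_moves):
    return 19.59 + 0.19 * n_comparisons + 0.31 * n_moves

def t_merge(n_comparisons, n_moves):
    return 13.98 + 1.10 * n_comparisons + 0.52 * n_moves

def predicted_choice(cs_counts, ms_counts):
    """cs_counts/ms_counts: (n_comparisons, n_moves) for each strategy on the
    same list (hypothetical inputs); returns the faster strategy's name."""
    if t_merge(*ms_counts) < t_cocktail(*cs_counts):
        return "merge sort"
    return "cocktail sort"
```

The operation counts fed in are placeholders; for a long, nearly inversely sorted list cocktail sort performs far more comparisons and moves than merge sort, which is what flips the predicted choice.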
5.2 Methods

The experiment was run online^1 with 100 participants recruited on Amazon Mechanical Turk, and it paid $1.25. The experiment comprised three stages: training, choice, and execution. In the training stage, each participant was taught to sort lists of numbers by executing the two contrasting strategies tested in the pilot studies: cocktail sort and merge sort. On each of the 11 training trials, the participant was instructed which strategy to use. The interface enforced that he or she correctly performed each step of that strategy. The interfaces were the same as in the pilot studies (see Figure 1). For both strategies, the chosen lists comprised nearly reversely sorted lists of length 4, 8, and 16 and nearly-sorted lists of length 16 and 32. For the cocktail sort strategy, each participant was also trained on a nearly inversely sorted list with 32 elements. Participants first practiced cocktail sort for five trials and then practiced merge sort. The last two trials contrasted the two strategies on long, nearly-sorted sequences with identical length. Nearly-sorted lists were created by inserting a randomly selected element at a different random location of an ascending list. Nearly inversely sorted lists were created by applying the same procedure to a descending list. In the choice phase, participants were shown 18 test lists. For each list, they were asked to choose which sorting strategy they would use if they had to sort this sequence. Participants were told that they would have to sort one randomly selected list with the strategy they chose for it. The test lists comprised six instances of each of three kinds of sequences: long and nearly inversely sorted, long and nearly-sorted, and short and nearly-sorted. The order of these sequences was randomized across participants. In the execution phase, one of the 12 short lists was randomly selected, and the participant had to sort it using the strategy he or she had previously chosen for that list.
To derive theoretical predictions, we gave each model the same information as our participants.

^1 http://cocosci.berkeley.edu/mturk/falk/StrategyChoice/consent.html

Figure 1: Interfaces used to train participants to perform (a) cocktail sort and (b) merge sort in the behavioral experiment.

5.3 Results

Our participants took 24.7 ± 6.7 minutes to complete the experiment (mean ± standard deviation). The median number of errors per training sequence was 2.45, and 95% of our participants made between 0.73 and 12.55 errors per training sequence. In the choice phase, 83% of our participants were more likely to choose merge sort when it was the superior strategy (compared to trials when it was not). We can thus be 95% confident that the population frequency of this adaptive strategy choice pattern lies between 74.9% and 89.4%; see Figure 2(b). This adaptive choice pattern was significantly more frequent than could be expected if strategy choice was independent of the lists' features (p < 10^-11). This is consistent with our model's predictions but inconsistent with the predictions of the RELACS, SSL, and SCADS models. Only rational metareasoning correctly predicted that the frequency of the adaptive strategy choice pattern would be above chance (p < 10^-5 for our model and p ≥ 0.46 for all other models). Figure 2(b) compares the proportion of participants exhibiting this pattern with the models' predictions. The non-overlapping credible intervals suggest that we can be 95% confident that the choices of people and rational metareasoning are more adaptive than those predicted by the three previous theories (all p < 0.001). Yet we can also be 95% confident that, at least in our experiment, people choose their strategy even more adaptively than rational metareasoning (p ≤ 0.02).
On average, our participants chose merge sort for 4.9 of the 6 long and nearly inversely sorted sequences (81.67% of the time, 95% credible interval: [77.8%, 93.0%]), but for only 1.79 of the 6 nearly-sorted long sequences (29.83% of the time, 95% credible interval: [12.9%, 32.4%]), and for only 1.62 of the 6 nearly-sorted short sequences (27.00% of the time, 95% credible interval: [16.7%, 40.4%]); see Figure 2(a). Thus, when merge sort was superior, our participants chose it significantly more often than cocktail sort (p < 10^-10). But, when merge sort was inferior, they chose cocktail sort more often than merge sort (p < 10^-7).

Figure 2: Pattern of strategy choices: (a) Relative frequency with which humans and models chose merge sort by list type. (b) Percentage of participants who chose merge sort more often when it was superior than when it was not. Error bars indicate 95% credible intervals.

5.4 Discussion

We evaluated our rational metareasoning model of human strategy selection against nine models instantiating three psychological theories. While those nine models completely failed to predict our participants' adaptive strategy choices, the predictions of rational metareasoning were qualitatively correct, and its choices came close to human performance. The RELACS and the SSL model failed because they do not represent problem features and do not learn about how those features affect each strategy's performance. The model-free learning assumed by SSL and RELACS was maladaptive because cocktail sort was faster for most training sequences, but was substantially slower for the long, nearly inversely sorted test sequences. The SCADS model failed mainly because its suboptimal learning mechanism was fooled by the slight imbalance between the training examples for cocktail sort and merge sort, but also because it can neither extrapolate nor capture the non-additive interaction between length and presortedness.
Instead, human-like adaptive strategy selection can be achieved by learning to predict each strategy's execution time and accuracy given features of the problem. To further elucidate the human mind's strategy selection learning algorithm, future research will evaluate our theory against an instance-based learning model [19]. Our participants outperformed the RELACS, SSL, and SCADS models, as well as rational metareasoning, in our strategy selection task. This suggests that neither psychology nor AI can yet fully account for people's adaptive strategy selection. People's superior performance could be enabled by a more powerful representation of the sequences, perhaps one that includes reverse-sortedness, or by the ability to choose strategies based on mental simulations of their execution on the presented list. These are just two of many possibilities, and more experiments are needed to unravel people's superior performance. In contrast to the sorting strategies in our experiment, most cognitive strategies operate on internal representations. However, there are two reasons to expect our conclusions to transfer: First, the metacognitive principles of strategy selection might be domain general. Second, the strategies people use to order things mentally might be based on their sorting strategies in the same way in which mental arithmetic is based on calculating with fingers or on paper.

6 Conclusions

Since neither psychology nor AI can yet fully account for people's adaptive strategy selection, further research into how people learn to select cognitive strategies may yield not only a better understanding of human intelligence, but also better solutions to the algorithm selection problem in computer science and artificial intelligence. Our results suggest that reasoning about which strategy to use might contribute to people's adaptive intelligence and can save more time than it takes.
Since our framework is very general, it can be applied to strategy selection in all areas of human cognition including judgment and decision-making [1, 3], as well as to the discovery of novel strategies [2]. Future research will investigate human strategy selection learning in more ecological domains such as mental arithmetic, decision-making, and problem solving, where people have to trade off speed versus accuracy. In conclusion, rational metareasoning is a promising theoretical framework for reverse-engineering people's capacity for adaptive strategy selection.

Acknowledgments. This work was supported by ONR MURI N00014-13-1-0341.

References

[1] G. Gigerenzer and R. Selten, Bounded rationality: The adaptive toolbox. MIT Press, 2002.
[2] R. S. Siegler, "Strategic development," Trends in Cognitive Sciences, vol. 3, pp. 430–435, Nov. 1999.
[3] J. W. Payne, J. R. Bettman, and E. J. Johnson, "Adaptive strategy selection in decision making," Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 14, no. 3, p. 534, 1988.
[4] J. N. Marewski and D. Link, "Strategy selection: An introduction to the modeling challenge," Wiley Interdisciplinary Reviews: Cognitive Science, vol. 5, no. 1, pp. 39–59, 2014.
[5] J. R. Rice, "The algorithm selection problem," Advances in Computers, vol. 15, pp. 65–118, 1976.
[6] S. Russell and E. Wefald, "Principles of metareasoning," Artificial Intelligence, vol. 49, no. 1-3, pp. 361–395, 1991.
[7] N. Hay, S. Russell, D. Tolpin, and S. Shimony, "Selecting computations: Theory and applications," in Uncertainty in Artificial Intelligence: Proceedings of the Twenty-Eighth Conference (N. de Freitas and K. Murphy, eds.), AUAI Press, 2012.
[8] D. Harada and S. Russell, "Meta-level reinforcement learning," in NIPS'98 Workshop on Abstraction and Hierarchy in Reinforcement Learning, 1998.
[9] T. Jaakkola and M.
Jordan, "A variational approach to Bayesian logistic regression models and their extensions," in Sixth International Workshop on Artificial Intelligence and Statistics, 1997.
[10] R. E. Kass and A. E. Raftery, "Bayes factors," Journal of the American Statistical Association, vol. 90, pp. 773–795, June 1995.
[11] W. D. Penny and G. R. Ridgway, "Efficient posterior probability mapping using Savage-Dickey ratios," PLoS ONE, vol. 8, no. 3, e59655, 2013.
[12] L. Kotthoff, "Algorithm selection for combinatorial search problems: A survey," AI Magazine, 2014.
[13] K. A. Smith-Miles, "Cross-disciplinary perspectives on meta-learning for algorithm selection," ACM Computing Surveys, vol. 41, Jan. 2009.
[14] H. Guo, Algorithm selection for sorting and probabilistic inference: a machine learning-based approach. PhD thesis, Kansas State University, 2003.
[15] M. G. Lagoudakis, M. L. Littman, and R. Parr, "Selecting the right algorithm," in Proceedings of the 2001 AAAI Fall Symposium Series: Using Uncertainty within Computation, Cape Cod, MA, 2001.
[16] J. Shrager and R. S. Siegler, "SCADS: A model of children's strategy choices and strategy discoveries," Psychological Science, vol. 9, pp. 405–410, Sept. 1998.
[17] I. Erev and G. Barron, "On adaptation, maximization, and reinforcement learning among cognitive strategies," Psychological Review, vol. 112, pp. 912–931, Oct. 2005.
[18] J. Rieskamp and P. E. Otto, "SSL: A theory of how people learn to select strategies," Journal of Experimental Psychology: General, vol. 135, pp. 207–236, May 2006.
[19] C. Gonzalez and V. Dutt, "Instance-based learning: Integrating sampling and repeated decisions from experience," Psychological Review, vol. 118, no. 4, pp. 523–551, 2011.
| 2014 | 196 | 5,287 |
PAC-Bayesian AUC classification and scoring James Ridgway∗ CREST and CEREMADE University Dauphine james.ridgway@ensae.fr Pierre Alquier CREST (ENSAE) pierre.alquier@ucd.ie Nicolas Chopin CREST (ENSAE) and HEC Paris nicolas.chopin@ensae.fr Feng Liang University of Illinois at Urbana-Champaign liangf@illinois.edu Abstract We develop a scoring and classification procedure based on the PAC-Bayesian approach and the AUC (Area Under Curve) criterion. We focus initially on the class of linear score functions. We derive PAC-Bayesian non-asymptotic bounds for two types of prior for the score parameters: a Gaussian prior, and a spike-and-slab prior; the latter makes it possible to perform feature selection. One important advantage of our approach is that it is amenable to powerful Bayesian computational tools. We derive in particular a Sequential Monte Carlo algorithm, as an efficient method which may be used as a gold standard, and an Expectation-Propagation algorithm, as a much faster but approximate method. We also extend our method to a class of non-linear score functions, essentially leading to a nonparametric procedure, by considering a Gaussian process prior. 1 Introduction Bipartite ranking (scoring) amounts to ranking (scoring) data from binary labels. An important problem in its own right, bipartite ranking is also an elegant way to formalise classification: once a score function has been estimated from the data, classification reduces to choosing a particular threshold, which determines the class assigned to each data-point, according to whether its score is above or below that threshold. It is convenient to choose that threshold only once the score has been estimated, so as to get finer control of the false negative and false positive rates; this is easily achieved by plotting the ROC (Receiver Operating Characteristic) curve. A standard optimality criterion for scoring is AUC (Area Under Curve), which measures the area under the ROC curve.
AUC is appealing for at least two reasons. First, maximising AUC is equivalent to minimising the L1 distance between the estimated score and the optimal score. Second, under mild conditions, Cortes and Mohri [2003] show that the AUC of a score s equals the probability that s(X−) < s(X+) for X− (resp. X+) a random draw from the negative (resp. positive) class. Yan et al. [2003] observed that AUC-based classification handles skewed classes (say the positive class is much larger than the other) much better than standard classifiers, because it enforces a small score for all members of the negative class (again assuming the negative class is the smaller one). One practical issue with AUC maximisation is that the empirical version of AUC is not a continuous function. One way to address this problem is to “convexify” this function, and study the properties of the so-obtained estimators [Clémençon et al., 2008a]. We follow instead the PAC-Bayesian approach in this paper, which consists of using a random estimator sampled from a pseudo-posterior distribution that penalises exponentially the (in our case) AUC risk. It is well known [see e.g. the monograph of Catoni, 2007] that the PAC-Bayesian approach comes with a set of powerful technical tools to establish non-asymptotic bounds; the first part of the paper derives such bounds. A second advantage, however, of this approach, as we show in the second part of the paper, is that it is amenable to powerful Bayesian computational tools, such as Sequential Monte Carlo and Expectation Propagation. 2 Theoretical bounds from the PAC-Bayesian Approach 2.1 Notations The data D consist of the realisation of n IID (independent and identically distributed) pairs (Xi, Yi) with distribution P, taking values in $\mathbb{R}^d \times \{-1, 1\}$. Let $n_+ = \sum_{i=1}^n 1\{Y_i = +1\}$ and $n_- = n - n_+$.
∗http://www.crest.fr/pagesperso.php?user=3328
For a score function s : Rd → R, the AUC risk and its empirical counterpart may be defined as $R(s) = \mathbb{P}_{(X,Y),(X',Y') \sim P}\left[ \{s(X) - s(X')\}(Y - Y') < 0 \right]$ and $R_n(s) = \frac{1}{n(n-1)} \sum_{i \neq j} 1\left[ \{s(X_i) - s(X_j)\}(Y_i - Y_j) < 0 \right]$. Let σ(x) = E(Y |X = x), $\bar{R} = R(\sigma)$ and $\bar{R}_n = R_n(\sigma)$. It is well known that σ is the score that minimises R(s), i.e. $R(s) \ge \bar{R} = R(\sigma)$ for any score s. The results of this section apply to the class of linear scores, sθ(x) = ⟨θ, x⟩, where ⟨θ, x⟩ = θT x denotes the inner product. Abusing notation, let R(θ) = R(sθ), Rn(θ) = Rn(sθ), and, for a given prior density πξ(θ) that may depend on some hyper-parameter ξ ∈ Ξ, define the Gibbs posterior density (or pseudo-posterior) as $\pi_{\xi,\gamma}(\theta|D) := \frac{\pi_\xi(\theta) \exp\{-\gamma R_n(\theta)\}}{Z_{\xi,\gamma}(D)}, \quad Z_{\xi,\gamma}(D) = \int_{\mathbb{R}^d} \pi_\xi(\tilde{\theta}) \exp\{-\gamma R_n(\tilde{\theta})\}\, d\tilde{\theta}$ for γ > 0. Both the prior and posterior densities are defined with respect to the Lebesgue measure over Rd. 2.2 Assumptions and general results Our general results require the following assumptions. Definition 2.1 We say that Assumption Dens(c) is satisfied for c > 0 if $\mathbb{P}(\langle X_1 - X_2, \theta \rangle \ge 0,\ \langle X_1 - X_2, \theta' \rangle \le 0) \le c \|\theta - \theta'\|$ for any θ and θ′ ∈ Rd such that ∥θ∥ = ∥θ′∥ = 1. This is a mild assumption, which holds for instance as soon as (X1 − X2)/∥X1 − X2∥ admits a bounded probability density; see the supplement. Definition 2.2 (Mammen & Tsybakov margin assumption) We say that Assumption MA(κ, C) is satisfied for κ ∈ [1, +∞] and C ≥ 1 if $\mathbb{E}\left[ (q^\theta_{1,2})^2 \right] \le C \left[ R(\theta) - \bar{R} \right]^{1/\kappa}$, where $q^\theta_{i,j} = 1\{\langle\theta, X_i - X_j\rangle(Y_i - Y_j) < 0\} - 1\{[\sigma(X_i) - \sigma(X_j)](Y_i - Y_j) < 0\} - R(\theta) + \bar{R}$. This assumption was introduced for classification by Mammen and Tsybakov [1999], and used for ranking by Clémençon et al. [2008b] and Robbiano [2013] (see also a nice discussion in Lecué [2007]). The larger κ, the less restrictive MA(κ, C). In fact, MA(∞, C) is always satisfied for C = 4. For a noiseless classification task (i.e. σ(Xi)Yi ≥ 0 almost surely), $\bar{R} = 0$ and $\mathbb{E}[(q^\theta_{1,2})^2] = \mathrm{Var}(q^\theta_{1,2}) = \mathbb{E}[1\{\langle\theta, X_1 - X_2\rangle(Y_1 - Y_2) < 0\}] = R(\theta) - \bar{R}$, so MA(1, 1) holds.
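The empirical risk Rn(s) is just the fraction of ordered pairs that the score mis-ranks, so it can be computed directly; a minimal pure-Python sketch (the function name is ours):

```python
def empirical_auc_risk(scores, labels):
    """Empirical AUC risk R_n: fraction of ordered pairs (i, j), i != j,
    on which the score mis-ranks the pair, i.e. (s_i - s_j)(y_i - y_j) < 0.
    Pairs with equal labels never contribute, matching the definition above."""
    n = len(scores)
    bad = sum(
        1
        for i in range(n)
        for j in range(n)
        if i != j and (scores[i] - scores[j]) * (labels[i] - labels[j]) < 0
    )
    return bad / (n * (n - 1))

# A score that ranks every positive above every negative has zero risk.
print(empirical_auc_risk([0.9, 0.8, 0.2, 0.1], [1, 1, -1, -1]))  # 0.0
```

As the text notes, this empirical risk is a step function of the score parameters, which is precisely why the pseudo-posterior approach (rather than direct maximisation) is attractive.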
More generally, MA(1, C) is satisfied as soon as the noise is small; see the discussion in Robbiano [2013] (Proposition 5, p. 1256) for a formal statement. From now on, we focus on either MA(1, C) or MA(∞, C), C ≥ 1. It is possible to prove convergence under MA(κ, 1) for a general κ ≥ 1, but at the price of complications regarding the choice of γ; see Catoni [2007], Alquier [2008] and Robbiano [2013]. We use the classical PAC-Bayesian methodology initiated by McAllester [1998] and Shawe-Taylor and Williamson [1997] (see Alquier [2008] and Catoni [2007] for a complete survey and more recent advances) to get the following results. Proofs of these and forthcoming results may be found in the supplement. Let $K(\rho, \pi)$ denote the Kullback-Leibler divergence, $K(\rho, \pi) = \int \rho(d\theta) \log\{\frac{d\rho}{d\pi}(\theta)\}$ if $\rho \ll \pi$, $\infty$ otherwise, and let $\mathcal{M}^1_+$ denote the set of probability distributions $\rho(d\theta)$. Lemma 2.1 Assume that MA(1, C) holds with C ≥ 1. For any fixed γ with 0 < γ ≤ (n−1)/(8C), for any ε > 0, with probability at least 1 − ε on the drawing of the data D, $\int R(\theta)\,\pi_{\xi,\gamma}(\theta|D)\,d\theta - \bar{R} \le 2 \inf_{\rho \in \mathcal{M}^1_+} \left\{ \int R(\theta)\,\rho(d\theta) - \bar{R} + \frac{2K(\rho, \pi) + \log\frac{4}{\varepsilon}}{\gamma} \right\}$. Lemma 2.2 Assume MA(∞, C) with C ≥ 1. For any fixed γ with 0 < γ ≤ (n−1)/8, for any ϵ > 0, with probability 1 − ϵ on the drawing of D, $\int R(\theta)\,\pi_{\xi,\gamma}(\theta|D)\,d\theta - \bar{R} \le \inf_{\rho \in \mathcal{M}^1_+} \left\{ \int R(\theta)\,\rho(d\theta) - \bar{R} + \frac{2K(\rho, \pi) + \log\frac{2}{\epsilon}}{\gamma} \right\} + \frac{16\gamma}{n-1}$. Both lemmas bound the expected risk excess for a random estimator of θ generated from πξ,γ(θ|D). 2.3 Independent Gaussian Prior We now specialise these results to the prior density $\pi_\xi(\theta) = \prod_{i=1}^d \phi(\theta_i; 0, \vartheta)$, i.e. a product of independent Gaussian distributions N(0, ϑ); ξ = ϑ in this case. Theorem 2.3 Assume MA(1, C), C ≥ 1, and Dens(c), c > 0, and take $\vartheta = \frac{2}{d}(1 + \frac{1}{n^2 d})$, γ = (n−1)/(8C); then there exists a constant α = α(c, C, d) such that for any ϵ > 0, with probability 1 − ϵ, $\int R(\theta)\,\pi_{\gamma}(\theta|D)\,d\theta - \bar{R} \le 2 \inf_{\theta_0} \left\{ R(\theta_0) - \bar{R} + \frac{\alpha d \log(n) + \log\frac{4}{\epsilon}}{n-1} \right\}$.
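The Kullback-Leibler term K(ρ, π) in these bounds is available in closed form when both ρ and π are Gaussian, as happens with the independent Gaussian prior of Section 2.3; a short sketch of the one-dimensional factor (function name is ours):

```python
import math

def kl_gauss_1d(mu_rho, var_rho, mu_pi, var_pi):
    """KL(rho || pi) between one-dimensional Gaussians. With independent
    coordinates, as in the prior of Section 2.3, the multivariate divergence
    is simply the sum of these terms over dimensions."""
    return 0.5 * (var_rho / var_pi + (mu_rho - mu_pi) ** 2 / var_pi
                  - 1.0 + math.log(var_pi / var_rho))
```

Plugging a Gaussian ρ centred near a good θ0 into Lemma 2.1 and evaluating this term is essentially how Theorem 2.3 is obtained.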
Theorem 2.4 Assume MA(∞, C), C ≥ 1, and Dens(c), c > 0, and take $\vartheta = \frac{2}{d}(1 + \frac{1}{n^2 d})$, $\gamma = C\sqrt{dn \log(n)}$; then there exists a constant α = α(c, C, d) such that for any ϵ > 0, with probability 1 − ϵ, $\int R(\theta)\,\pi_{\gamma}(\theta|D)\,d\theta - \bar{R} \le \inf_{\theta_0} \left\{ R(\theta_0) - \bar{R} \right\} + \frac{\alpha \sqrt{d \log(n)} + \log\frac{2}{\epsilon}}{\sqrt{n}}$. The proof of these results is provided in the supplementary material. It is known that, under MA(κ, C), the rate $(d/n)^{\kappa/(2\kappa-1)}$ is minimax-optimal for classification problems; see Lecué [2007]. Following Robbiano [2013], we conjecture that this rate is also optimal for ranking problems. 2.4 Spike and slab prior for feature selection The independent Gaussian prior considered in the previous section is a natural choice, but it does not accommodate sparsity, that is, the possibility that only a small subset of the components of Xi actually determine the membership to either class. For sparse scenarios, one may use the spike and slab prior of Mitchell and Beauchamp [1988] and George and McCulloch [1993], $\pi_\xi(\theta) = \prod_{i=1}^d \left[ p\,\phi(\theta_i; 0, v_1) + (1-p)\,\phi(\theta_i; 0, v_0) \right]$ with $\xi = (p, v_0, v_1) \in [0, 1] \times (\mathbb{R}^+)^2$ and $v_0 \ll v_1$, for which we obtain the following result. Note that $\|\theta\|_0$ is the number of non-zero coordinates of $\theta \in \mathbb{R}^d$. Theorem 2.5 Assume MA(1, C) holds with C ≥ 1, Dens(c) holds with c > 0, and take p = 1 − exp(−1/d), $v_0 \le 1/(2nd \log(d))$, and γ = (n−1)/(8C). Then there is a constant α = α(C, v1, c) such that for any ε > 0, with probability at least 1 − ε on the drawing of the data D, $\int R(\theta)\,\pi_{\gamma}(d\theta|D) - \bar{R} \le 2 \inf_{\theta_0} \left\{ R(\theta_0) - \bar{R} + \frac{\alpha \|\theta_0\|_0 \log(nd) + \log\frac{4}{\varepsilon}}{2(n-1)} \right\}$. Compared to Theorem 2.3, the bound above increases logarithmically rather than linearly in d, and depends explicitly on $\|\theta_0\|_0$, the sparsity of θ0. This suggests that the spike and slab prior should lead to better performance than the Gaussian prior in sparse scenarios. The rate $\|\theta_0\|_0 \log(d)/n$ is the same as the one obtained in sparse regression; see e.g. Bühlmann and van de Geer [2011].
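The spike-and-slab density itself is easy to evaluate, which also illustrates why it favours sparse vectors; a minimal sketch, with function names ours and p set to the value 1 − exp(−1/d) used in Theorem 2.5:

```python
import math

def normal_pdf(x, var):
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def spike_slab_log_prior(theta, p, v0, v1):
    """Log density of the spike-and-slab prior: each coordinate theta_i is
    drawn from the mixture p * N(0, v1) + (1 - p) * N(0, v0), with v0 << v1
    so that the spike concentrates near zero."""
    return sum(math.log(p * normal_pdf(t, v1) + (1.0 - p) * normal_pdf(t, v0))
               for t in theta)

# A sparse vector is far more likely under this prior than a dense one,
# because zero coordinates fall under the tall, narrow spike.
sparse = [1.0, 0.0, 0.0, 0.0]
dense = [0.5, 0.5, 0.5, 0.5]
```

This log prior is exactly what enters the Gibbs pseudo-posterior of Section 2.1 when the spike-and-slab prior replaces the Gaussian one.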
Finally, note that if v0 → 0, we recover the more standard prior which assigns a point mass at zero to every component. However, this leads to a pseudo-posterior which is a mixture of $2^d$ components that mix Dirac masses and continuous distributions, and which is thus more difficult to approximate (although see the related remark in Section 3.4 for Expectation-Propagation). 3 Practical implementation of the PAC-Bayesian approach 3.1 Choice of hyper-parameters Theorems 2.3, 2.4, and 2.5 propose specific values for the hyper-parameters γ and ξ, but these values depend on some unknown constant C. Two data-driven ways to choose γ and ξ are (i) cross-validation (which we will use for γ), and (ii) (pseudo-)evidence maximisation (which we will use for ξ). The latter may be justified from intermediate results of our proofs in the supplement, which provide an empirical bound on the expected risk: $\int R(\theta)\,\pi_{\xi,\gamma}(\theta|D)\,d\theta - \bar{R} \le \Psi_{\gamma,n} \inf_{\rho \in \mathcal{M}^1_+} \left\{ \int R_n(\theta)\,\rho(d\theta) - \bar{R}_n + \frac{2K(\rho, \pi) + \log\frac{2}{\epsilon}}{\gamma} \right\}$ with $\Psi_{\gamma,n} \le 2$. The right-hand side is minimised at ρ(dθ) = πξ,γ(θ|D)dθ, and the so-obtained bound is $-\Psi_{\gamma,n} \log(Z_{\xi,\gamma}(D))/\gamma$ plus constants. Minimising the upper bound with respect to the hyper-parameter ξ is therefore equivalent to maximising log Zξ,γ(D) with respect to ξ. This is of course akin to the empirical Bayes approach that is commonly used in probabilistic machine learning. Regarding γ, the minimisation is more cumbersome because of the dependence on the log(2/ϵ) term and on Ψn,γ, which is why we recommend cross-validation instead. It seems noteworthy that, besides Alquier and Biau [2013], very few papers discuss the practical implementation of PAC-Bayes, beyond some brief mention of MCMC (Markov chain Monte Carlo). However, estimating the normalising constant of a target density simulated with MCMC is notoriously difficult. In addition, even if one decides to fix the hyper-parameters to some arbitrary value, MCMC may become slow and difficult to calibrate if the dimension of the sampling space becomes large.
This is particularly true if the target does not (as in our case) have some specific structure that make it possible to implement Gibbs sampling. The two next sections discuss two efficient approaches that make it possible to approximate both the pseudo-posterior πξ,γ(θ|D) and its normalising constant, and also to perform cross-validation with little overhead. 3.2 Sequential Monte Carlo Given the particular structure of the pseudo-posterior πξ,γ(θ|D), a natural approach to simulate from πξ,γ(θ|D) is to use tempering SMC [Sequential Monte Carlo Del Moral et al., 2006] that is, define a certain sequence γ0 = 0 < γ1 < . . . < γT , start by sampling from the prior πξ(θ), then applies successive importance sampling steps, from πξ,γt−1(θ|D) to πξ,γt(θ|D), leading to importance weights proportional to: πξ,γt(θ|D) πξ,γt−1(θ|D) ∝exp {−(γt −γt−1)Rn(θ)} . When the importance weights become too skewed, one rejuvenates the particles through a resampling step (draw particles randomly with replacement, with probability proportional to the weights) and a move step (move particles according to a certain MCMC kernel). 4 One big advantage of SMC is that it is very easy to make it fully adaptive. For the choice of the successive γt, we follow Jasra et al. [2007] in solving numerically (1) in order to impose that the Effective sample size has a fixed value. This ensures that the degeneracy of the weights always remain under a certain threshold. For the MCMC kernel, we use a Gaussian random walk Metropolis step, calibrated on the covariance matrix of the resampled particles. See Algorithm 1 for a summary. Algorithm 1 Tempering SMC Input N (number of particles), τ ∈(0, 1) (ESS threshold), κ > 0 (random walk tuning parameter) Init. Sample θi 0 ∼πξ(θ) for i = 1 to N, set t ←1, γ0 = 0, Z0 = 1. Loop a. Solve in γt the equation {PN i=1 wt(θi t−1)}2 PN i=1{wt(θi t−1))2} = τN, wt(θ) = exp[−(γt −γt−1)Rn(θ)] (1) using bisection search. 
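Algorithm 1 can be condensed into a small, self-contained sketch. The toy target below (a standard normal prior with the stand-in empirical risk R_n(θ) = θ², for which log Z_γ = −½ log(1 + 2γ) is known in closed form) is ours, introduced only to exercise the algorithm; all function names are likewise ours:

```python
import math
import random

def ess(weights):
    """Effective sample size of a set of unnormalised weights."""
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

def next_gamma(risks, g_prev, g_max, tau, n):
    """Bisection step of Algorithm 1 (Eq. (1)): the next temperature gamma_t
    is chosen so the incremental weights exp(-(gamma_t - g_prev) * R_n(theta))
    have ESS = tau * N, capping at the final temperature g_max."""
    if ess([math.exp(-(g_max - g_prev) * r) for r in risks]) >= tau * n:
        return g_max
    lo, hi = g_prev, g_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ess([math.exp(-(mid - g_prev) * r) for r in risks]) > tau * n:
            lo = mid
        else:
            hi = mid
    return hi

def rw_move(t, gamma, step=0.3):
    """Random-walk Metropolis kernel leaving pi(t) ∝ N(t; 0, 1) exp(-gamma t^2)
    invariant; this plays the role of the kernel M_t in Algorithm 1."""
    prop = t + random.gauss(0.0, step)
    log_target = lambda u: -0.5 * u * u - gamma * u * u
    if math.log(1.0 - random.random()) < log_target(prop) - log_target(t):
        return prop
    return t

def tempering_smc(risk, sample_prior, mcmc_move, g_max, n=200, tau=0.5):
    """Toy version of Algorithm 1: adapt gamma, reweight, resample, move,
    and accumulate the log normalising constant log Z_{xi,gamma}(D)."""
    theta = [sample_prior() for _ in range(n)]
    gamma, log_z = 0.0, 0.0
    while gamma < g_max:
        risks = [risk(t) for t in theta]
        g_new = next_gamma(risks, gamma, g_max, tau, n)
        w = [math.exp(-(g_new - gamma) * r) for r in risks]
        log_z += math.log(sum(w) / n)
        theta = random.choices(theta, weights=w, k=n)   # multinomial resampling
        for _ in range(5):                              # MCMC rejuvenation
            theta = [mcmc_move(t, g_new) for t in theta]
        gamma = g_new
    return theta, log_z
```

The same skeleton applies to the actual pseudo-posterior by substituting the empirical AUC risk for the toy risk and a multivariate random-walk kernel for `rw_move`.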
If γt ≥γT , set ZT = Zt−1 × n 1 N PN i=1 wt(θi t−1) o , and stop. b. Resample: for i = 1 to N, draw Ai t in 1, . . . , N so that P(Ai t = j) = wt(θj t−1)/ PN k=1 wt(θk t−1); see Algorithm 1 in the supplement. c. Sample θi t ∼Mt(θAi t t−1, dθ) for i = 1 to N where Mt is a MCMC kernel that leaves invariant πt; see Algorithm 2 in the supplement for an instance of such a MCMC kernel, which takes as an input S = κˆΣ, where ˆΣ is the covariance matrix of the θAi t t−1. d. Set Zt = Zt−1 × n 1 N PN i=1 wt(θi t−1) o . In our context, tempering SMC brings two extra advantages: it makes it possible to obtain samples from πξ,γ(θ|D) for a whole range of values of γ, rather than a single value. And it provides an approximation of Zξ,γ(D) for the same range of γ values, through the quantity Zt defined in Algorithm 1. 3.3 Expectation-Propagation (Gaussian prior) The SMC sampler outlined in the previous section works fairly well, and we will use it as gold standard in our simulations. However, as any other Monte Carlo method, it may be too slow for large datasets. We now turn our attention to EP [Expectation-Propagation Minka, 2001], a general framework to derive fast approximations to target distributions (and their normalising constants). First note that the pseudo-posterior may be rewritten as: πξ,γ(θ|D) = 1 Zξ,γ(D)πξ(θ) × Y i,j fij(θ), fij(θ) = exp [−γ′1{⟨θ, Xi −Xj⟩< 0}] where γ′ = γ/n+n−, and the product is over all (i, j) such that Yi = 1, Yj = −1. EP generates an approximation of this target distribution based on the same factorisation: q(θ) ∝q0(θ) Y i,j qij(θ), qij(θ) = exp{−1 2θT Qijθ + rT ijθ}. We consider in the section the case where the prior is Gaussian, as in Section 2.3. Then one may set q0(θ) = πξ(θ). The approximating factors are un-normalised Gaussian densities (under a natural parametrisation), leading to an overall approximation that is also Gaussian, but other types of exponential family parametrisations may be considered; see next section and Seeger [2005]. 
EP updates iteratively each site qij (that is, it updates the parameters Qij and rij), conditional on all the sites, by matching the moments of q with those of the hybrid distribution hij(θ) ∝q(θ)fij(θ) qij(θ) ∝q0(θ)fij(θ) Y (k,l)̸=(i,j) fkl(θ) 5 where again the product is over all (k, l) such that Yk = 1, Yl = −1, and (k, l) ̸= (i, j). We refer to the supplement for a precise algorithmic description of our EP implementation. We highlight the following points. First, the site update is particularly simple in our case: hij(θ) ∝exp{θT rh ij −1 2θT Qh ijθ} exp [−γ′1{⟨θ, Xi −Xj⟩< 0}] , with rh ij = P (k,l)̸=(i,j) rkl, Qh ij = P (k,l)̸=(i,j) Qkl, which may be interpreted as: θ conditional on T(θ) = ⟨θ, Xi −Xj⟩has a d −1-dimensional Gaussian distribution, and the distribution of T(θ) is that of a one-dimensional Gaussian penalised by a step function. The two first moments of this particular hybrid may therefore be computed exactly, and in O(d2) time, as explained in the supplement. The updates can be performed efficiently using the fact that the linear combination (Xi −Xj)θ is a one dimensional Gaussian. For our numerical experiment we used a parallel version of EP Van Gerven et al. [2010]. The complexity of our EP implementation is O(n+n−d2 + d3). Second, EP offers at no extra cost an approximation of the normalising constant Zξ,γ(D) of the target πξ,γ(θ|D); in fact, one may even obtain derivatives of this approximated quantity with respect to hyper-parameters. See again the supplement for more details. Third, in the EP framework, cross-validation may be interpreted as dropping all the factors qij that depend on a given data-point Xi in the global approximation q. This makes it possible to implement cross-validation at little extra cost [Opper and Winther, 2000]. 
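Because the sites are un-normalised Gaussians in natural parametrisation, forming the global approximation q(θ) from the prior and the current sites is just an addition of their (Q, r) parameters; a small numpy sketch (names are ours):

```python
import numpy as np

def ep_global_approx(Q0, r0, site_params):
    """Combine a Gaussian prior (natural parameters Q0, r0) with Gaussian
    sites q_ij(theta) = exp(-theta^T Q_ij theta / 2 + r_ij^T theta):
    natural parameters add, giving the global approximation's mean m and
    covariance S."""
    Q = Q0 + sum(Qij for Qij, _ in site_params)
    r = r0 + sum(rij for _, rij in site_params)
    S = np.linalg.inv(Q)
    return S @ r, S

# Prior N(0, I) plus one site pulling the first coordinate upward.
m, S = ep_global_approx(np.eye(2), np.zeros(2),
                        [(np.eye(2), np.array([2.0, 0.0]))])
```

A site update then consists of subtracting one site's (Q, r) to form the cavity, moment-matching against the hybrid, and writing the difference back, as described in the text.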
3.4 Expectation-Propagation (spike and slab prior) To adapt our EP algorithm to the spike and slab prior of Section 2.4, we introduce latent variables Zk = 0/1 which “choose” for each component θk whether it comes from the slab or from the spike, and we consider the joint target $\pi_{\xi,\gamma}(\theta, z|D) \propto \left\{ \prod_{k=1}^d \mathcal{B}(z_k; p)\, N(\theta_k; 0, v_{z_k}) \right\} \exp\left[ -\frac{\gamma}{n_+ n_-} \sum_{ij} 1\{\langle\theta, X_i - X_j\rangle > 0\} \right]$. On top of the $n_+ n_-$ Gaussian sites defined in the previous section, we add a product of d sites to approximate the prior. Following Hernandez-Lobato et al. [2013], we use $q_k(\theta_k, z_k) = \exp\left[ z_k \log\frac{p_k}{1 - p_k} - \frac{1}{2}\theta_k^2 u_k + v_k \theta_k \right]$, that is, an (un-normalised) product of an independent Bernoulli distribution for zk times a Gaussian distribution for θk. Again, the site update is fairly straightforward, and may be implemented in O(d2) time. See the supplement for more details. Another advantage of this formulation is that we obtain a Bernoulli approximation of the marginal pseudo-posterior πξ,γ(zi = 1|D) to use in feature selection. Interestingly, taking v0 to be exactly zero also yields stable results, corresponding to the case where the spike is a Dirac mass. 4 Extension to non-linear scores To extend our methodology to non-linear score functions, we consider the pseudo-posterior $\pi_{\xi,\gamma}(ds|D) \propto \pi_\xi(ds) \exp\left[ -\frac{\gamma}{n_+ n_-} \sum_{i \in D_+,\, j \in D_-} 1\{s(X_i) - s(X_j) > 0\} \right]$ where πξ(ds) is some prior probability measure over an infinite-dimensional functional class. Let si = s(Xi), s1:n = (s1, . . . , sn) ∈ Rn, and assume that πξ(ds) is a GP (Gaussian process) prior associated to some kernel kξ(x, x′); then, using a standard trick in the GP literature [Rasmussen and Williams, 2006], one may derive the marginal (posterior) density (with respect to the n-dimensional Lebesgue measure) of s1:n as $\pi_{\xi,\gamma}(s_{1:n}|D) \propto N_d(s_{1:n}; 0, K_\xi) \exp\left[ -\frac{\gamma}{n_+ n_-} \sum_{i \in D_+,\, j \in D_-} 1\{s_i - s_j > 0\} \right]$ where $N_d(s_{1:n}; 0, K_\xi)$ denotes the probability density of the N(0, Kξ) distribution, and Kξ is the n × n matrix $(k_\xi(X_i, X_j))_{i,j=1}^n$.
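The marginal pseudo-posterior over the latent scores can be evaluated (up to its normalising constant) with a few lines of numpy. In this sketch the names are ours, the kernel is the squared-exponential one used later in Section 5, and the penalty counts mis-ranked pairs, i.e. a positive scored below a negative, following the sign convention of the linear-score sites in Section 3.3:

```python
import numpy as np

def sq_exp_kernel(X, lengthscale=1.0, variance=1.0, jitter=1e-8):
    """Squared-exponential kernel matrix K_xi over the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2) + jitter * np.eye(len(X))

def log_pseudo_posterior(s, X, y, gamma):
    """Unnormalised log of pi_{xi,gamma}(s_{1:n} | D): a GP prior N(0, K_xi)
    on the latent scores, minus gamma / (n+ n-) times the number of pairs
    (i in D+, j in D-) that the score mis-ranks."""
    K = sq_exp_kernel(X)
    log_prior = -0.5 * s @ np.linalg.solve(K, s)      # up to an s-free constant
    pos, neg = s[y == 1], s[y == -1]
    mis = (pos[:, None] < neg[None, :]).sum()
    return log_prior - gamma * mis / (pos.size * neg.size)
```

This n-dimensional density is the target that the SMC sampler of Section 3.2 or the EP algorithm of Section 3.3 would then approximate.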
This marginal pseudo-posterior retains essentially the structure of the pseudo-posterior πξ,γ(θ|D) for linear scores, except that the “parameter” s1:n is now of dimension n. We can apply straightforwardly the SMC sampler of Section 3.2, and the EP algorithm of Section 3.3, to this new target distribution. In fact, for the EP implementation, the particularly simple structure of a single site, $\exp[-\gamma' 1\{s_i - s_j > 0\}]$, makes it possible to implement a site update in O(1) time, leading to an overall complexity of $O(n_+ n_- + n^3)$ for the EP algorithm. Theoretical results for this approach could be obtained by applying lemmas from e.g. van der Vaart and van Zanten [2009], but we leave this for future study. 5 Numerical Illustration Figure 1 compares the EP approximation with the output of our SMC sampler, on the well-known Pima Indians dataset and a Gaussian prior. Marginal first and second order moments essentially match; see the supplement for further details. The subsequent results are obtained with EP. Figure 1: EP approximation (green) compared to SMC (blue) of the marginal posterior of the first three coefficients ((a) θ1, (b) θ2, (c) θ3), for the Pima dataset (see the supplement for additional analysis). We now compare our PAC-Bayesian approach (computed with EP) with Bayesian logistic regression (to deal with non-identifiable cases), and with the rankboost algorithm [Freund et al., 2003] on different datasets1; note that Cortes and Mohri [2003] showed that the function optimised by rankboost is AUC. As mentioned in Section 3, we set the prior hyperparameters by maximizing the evidence, and we use cross-validation to choose γ. To ensure convergence of EP, when dealing with difficult sites, we use damping [Seeger, 2005]. The GP version of the algorithm is based on a squared exponential kernel.
Table 1 summarises the results; balance refers to the size of the smaller class in the data (recall that the AUC criterion is particularly relevant for unbalanced classification tasks), and EP-AUC (resp. GPEP-AUC) refers to the EP approximation of the pseudo-posterior based on our Gaussian prior (resp. Gaussian process prior). See also Figure 2 for ROC curve comparisons, and Table 1 in the supplement for a CPU time comparison. Note how the GP approach performs better for the Colon data, where the number of covariates (2000) is very large, but the number of observations is only 40. It seems also that EP gives a better approximation in this case because of the lower dimensionality of the pseudo-posterior (Figure 2b). 1All available at http://archive.ics.uci.edu/ml/

Dataset  Covariates  Balance  EP-AUC  GPEP-AUC  Logit   Rankboost
Pima     7           34%      0.8617  0.8557    0.8646  0.8224
Credit   60          28%      0.7952  0.7922    0.7561  0.788
DNA      180         22%      0.9814  0.9812    0.9696  0.9814
SPECTF   22          50%      0.8684  0.8545    0.8715  0.8684
Colon    2000        40%      0.7034  0.75      0.73    0.5935
Glass    10          1%       0.9843  0.9629    0.9029  0.9436

Table 1: Comparison of AUC. The Glass dataset originally has more than two classes; we compare the “silicon” class against all others. Figure 2: ROC curves for (a) Rankboost vs EP-AUC on Pima, (b) Rankboost vs GPEP-AUC on Colon, and (c) Logistic vs EP-AUC on Glass, for the examples described more systematically in Table 1. The PAC version is always shown in black. Finally, we also investigate feature selection for the DNA dataset (180 covariates) using a spike and slab prior. The regularization plot (Figure 3a) shows how certain coefficients shrink to zero as the spike’s variance v0 goes to zero, allowing for some sparsity. The aim of a positive variance in the spike is to absorb negligible effects into it [Ročková and George, 2013].
We observe this effect in Figure 3a, where one of the covariates becomes positive when v0 decreases. Figure 3: (a) Regularization plot for v0 ∈ [10−6, 0.1] and (b) estimation for v0 = 10−6, for the DNA dataset; blue circles denote posterior probabilities ≥ 0.5. 6 Conclusion The combination of the PAC-Bayesian theory and Expectation-Propagation leads to fast and efficient AUC classification algorithms, as observed on a variety of datasets, some of them very unbalanced. Future work may include extending our approach to more general ranking problems (e.g. multiclass), establishing non-asymptotic bounds in the nonparametric case, and reducing the CPU time by considering only a subset of all the pairs of datapoints. Bibliography P. Alquier. PAC-Bayesian bounds for randomized empirical risk minimizers. Mathematical Methods of Statistics, 17(4):279–304, 2008. P. Alquier and G. Biau. Sparse single-index model. J. Mach. Learn. Res., 14(1):243–280, 2013. P. Bühlmann and S. van de Geer. Statistics for High-Dimensional Data. Springer, 2011. O. Catoni. PAC-Bayesian Supervised Classification, volume 56. IMS Lecture Notes & Monograph Series, 2007. S. Clémençon, G. Lugosi, and N. Vayatis.
Ranking and empirical minimization of U-statistics. Ann. Stat., 36(2):844–874, 2008a. S. Clémençon, V. C. Tran, and H. De Arazoza. A stochastic SIR model with contact-tracing: large population limits and statistical inference. Journal of Biological Dynamics, 2(4):392–414, 2008b. C. Cortes and M. Mohri. AUC optimization vs. error rate minimization. In NIPS, volume 9, 2003. P. Del Moral, A. Doucet, and A. Jasra. Sequential Monte Carlo samplers. J. R. Statist. Soc. B, 68(3):411–436, 2006. ISSN 1467-9868. Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. J. Mach. Learn. Res., 4:933–969, 2003. E. I. George and R. E. McCulloch. Variable selection via Gibbs sampling. J. Am. Statist. Assoc., 88(423):881–889, 1993. D. Hernandez-Lobato, J. Hernandez-Lobato, and P. Dupont. Generalized Spike-and-Slab Priors for Bayesian Group Feature Selection Using Expectation Propagation. J. Mach. Learn. Res., 14:1891–1945, 2013. A. Jasra, D. Stephens, and C. Holmes. On population-based simulation for static inference. Statist. Comput., 17(3):263–279, 2007. G. Lecué. Méthodes d'agrégation: optimalité et vitesses rapides. Ph.D. thesis, Université Paris 6, 2007. E. Mammen and A. Tsybakov. Smooth discrimination analysis. Ann. Stat., 27(6):1808–1829, 1999. D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the eleventh annual conference on Computational learning theory, pages 230–234. ACM, 1998. T. Minka. Expectation Propagation for approximate Bayesian inference. In Proc. 17th Conf. Uncertainty Artificial Intelligence, UAI ’01, pages 362–369. Morgan Kaufmann Publishers Inc., 2001. T. J. Mitchell and J. Beauchamp. Bayesian variable selection in linear regression. J. Am. Statist. Assoc., 83(404):1023–1032, 1988. M. Opper and O. Winther. Gaussian Processes for Classification: Mean-field Algorithms. Neural Computation, 12(11):2655–2684, November 2000. C. Rasmussen and C. Williams.
Gaussian processes for Machine Learning. MIT Press, 2006. S. Robbiano. Upper bounds and aggregation in bipartite ranking. Elec. J. of Stat., 7:1249–1271, 2013. V. Ročková and E. George. EMVS: The EM Approach to Bayesian Variable Selection. J. Am. Statist. Assoc., 2013. M. Seeger. Expectation propagation for exponential families. Technical report, U. of California, 2005. J. Shawe-Taylor and R. C. Williamson. A PAC analysis of a Bayesian estimator. In Proc. conf. Computat. learn. theory, pages 2–9. ACM, 1997. A. W. van der Vaart and J. H. van Zanten. Adaptive Bayesian estimation using a Gaussian random field with inverse Gamma bandwidth. Ann. Stat., pages 2655–2675, 2009. M. A. J. Van Gerven, B. Cseke, F. P. de Lange, and T. Heskes. Efficient Bayesian multivariate fMRI analysis using a sparsifying spatio-temporal prior. NeuroImage, 50:150–161, 2010. L. Yan, R. Dodier, M. Mozer, and R. Wolniewicz. Optimizing classifier performance via an approximation to the Wilcoxon-Mann-Whitney statistic. Proc. 20th Int. Conf. Mach. Learn., pages 848–855, 2003.
| 2014 | 197 | 5,288 |
Probabilistic Differential Dynamic Programming Yunpeng Pan and Evangelos A. Theodorou Daniel Guggenheim School of Aerospace Engineering Institute for Robotics and Intelligent Machines Georgia Institute of Technology Atlanta, GA 30332 ypan37@gatech.edu, evangelos.theodorou@ae.gatech.edu Abstract We present a data-driven, probabilistic trajectory optimization framework for systems with unknown dynamics, called Probabilistic Differential Dynamic Programming (PDDP). PDDP takes into account uncertainty explicitly for dynamics models using Gaussian processes (GPs). Based on the second-order local approximation of the value function, PDDP performs Dynamic Programming around a nominal trajectory in Gaussian belief spaces. Different from typical gradient-based policy search methods, PDDP does not require a policy parameterization and learns a locally optimal, time-varying control policy. We demonstrate the effectiveness and efficiency of the proposed algorithm using two nontrivial tasks. Compared with the classical DDP and a state-of-the-art GP-based policy search method, PDDP offers a superior combination of data-efficiency, learning speed, and applicability. 1 Introduction Differential Dynamic Programming (DDP) is a powerful trajectory optimization approach. Originally introduced in [1], DDP generates locally optimal feedforward and feedback control policies along with an optimal state trajectory. Compared with global optimal control approaches, the locally optimal DDP shows superior computational efficiency and scalability to high-dimensional problems. In the last decade, variations of DDP have been proposed in both control and machine learning communities [2][3][4][5][6]. Recently, DDP was applied to high-dimensional policy search, which achieved promising results in challenging control tasks [7]. DDP is derived based on linear approximations of the nonlinear dynamics along state and control trajectories; it therefore relies on accurate and explicit dynamics models.
However, modeling a dynamical system is in general a challenging task, and model uncertainty is one of the principal limitations of model-based methods. Various parametric and semi-parametric approaches have been developed to address these issues, such as minimax DDP using Receptive Field Weighted Regression (RFWR) by Morimoto and Atkeson [8], and DDP using expert-demonstrated trajectories by Abbeel et al. [9]. Motivated by the complexity of the relationships between states, controls and observations in autonomous systems, in this work we take a Bayesian non-parametric approach using Gaussian Processes (GPs). Over the last few years, GP-based control and Reinforcement Learning (RL) algorithms have increasingly drawn more attention in the control theory and machine learning communities. For instance, the works by Rasmussen et al. [10], Nguyen-Tuong et al. [11], Deisenroth et al. [12][13][14] and Hemakumara et al. [15] have demonstrated the remarkable applicability of GP-based control and RL methods in robotics. In particular, a recently proposed GP-based policy search framework called PILCO, developed by Deisenroth and Rasmussen [13] (an improved version has been developed by Deisenroth, Fox and Rasmussen [14]), has achieved unprecedented performance in terms of data-efficiency and policy learning speed. PILCO, as well as most gradient-based policy search algorithms, requires iterative methods (e.g., CG or BFGS) for solving non-convex optimization to obtain optimal policies. The proposed approach does not require a policy parameterization. Instead, PDDP finds a linear, time-varying control policy based on a Bayesian non-parametric representation of the dynamics, and outperforms PILCO in terms of control learning speed while maintaining a comparable data-efficiency.
2 Proposed Approach The proposed PDDP framework consists of 1) a Bayesian non-parametric representation of the unknown dynamics; 2) local approximations of the dynamics and value functions; 3) locally optimal controller learning. 2.1 Problem formulation We consider a general unknown stochastic system described by the following differential equation: $dx = f(x, u)\,dt + C(x, u)\,d\omega, \quad x(t_0) = x_0, \quad d\omega \sim N(0, \Sigma_\omega)$, (1) where x ∈ Rn is the state, u ∈ Rm is the control, t is time, and ω ∈ Rp is standard Brownian motion noise. The trajectory optimization problem is defined as finding a sequence of states and controls that minimizes the expected cost $J^\pi(x(t_0)) = \mathbb{E}\left[ h(x(T)) + \int_{t_0}^{T} L(x(t), \pi(x(t)), t)\,dt \right]$, (2) where h(x(T)) is the terminal cost, L(x(t), π(x(t)), t) is the instantaneous cost rate, and u(t) = π(x(t)) is the control policy. The cost Jπ(x(t0)) is defined as the expectation of the total cost accumulated from t0 to T. For the rest of our analysis, we denote xk = x(tk) in discrete time, where k = 0, 1, ..., H is the time step; we use this subscript rule for other variables as well. 2.2 Probabilistic dynamics model learning The continuous functional mapping from a state-control pair ˜x = (x, u) ∈ Rn+m to the state transition dx can be viewed as an inference with the goal of inferring dx given ˜x. We view this inference as a nonlinear regression problem. In this subsection, we introduce the Gaussian process (GP) approach to learning the dynamics model in (1). A GP is defined as a collection of random variables, any finite number of which have a joint Gaussian distribution. Given a sequence of state-control pairs ˜X = {(x0, u0), . . . , (xH, uH)} and the corresponding state transitions dX = {dx0, . . . , dxH}, a GP is completely defined by a mean function and a covariance function.
The joint distribution of the observed outputs and the output corresponding to a given test state-control pair x̃* = (x*, u*) can be written as

p\begin{pmatrix} dX \\ dx^* \end{pmatrix} \sim \mathcal{N}\left( 0, \begin{bmatrix} K(\tilde{X}, \tilde{X}) + \sigma_n I & K(\tilde{X}, \tilde{x}^*) \\ K(\tilde{x}^*, \tilde{X}) & K(\tilde{x}^*, \tilde{x}^*) \end{bmatrix} \right).

The covariance of this multivariate Gaussian distribution is defined via a kernel matrix K(x_i, x_j). In particular, in this paper we consider the Gaussian kernel

K(x_i, x_j) = \sigma_s^2 \exp\big(-\tfrac{1}{2}(x_i - x_j)^{\mathsf T} W (x_i - x_j)\big) + \sigma_n^2,

with hyper-parameters σ_s, σ_n and W. The kernel function can be interpreted as a similarity measure between random variables: if the training pairs x̃_i and x̃_j are close to each other in the kernel space, their outputs dx_i and dx_j are highly correlated. The posterior distribution, which is also Gaussian, is obtained by constraining the joint distribution to contain the output dx* that is consistent with the observations. Assuming independent outputs (no correlation between output dimensions) and given a test input x̃_k = (x_k, u_k) at time step k, the one-step predictive mean and variance of the state transition are

\mathbb{E}_f[dx_k] = K(\tilde{x}_k, \tilde{X})\big(K(\tilde{X}, \tilde{X}) + \sigma_n I\big)^{-1} dX,  (3)
\mathrm{VAR}_f[dx_k] = K(\tilde{x}_k, \tilde{x}_k) - K(\tilde{x}_k, \tilde{X})\big(K(\tilde{X}, \tilde{X}) + \sigma_n I\big)^{-1} K(\tilde{X}, \tilde{x}_k).

The state distribution at k = 1 is p(x_1) ∼ N(μ_1, Σ_1), where the state mean and variance are μ_1 = x_0 + E_f[dx_0] and Σ_1 = VAR_f[dx_0]. When propagating the GP-based dynamics over a trajectory of time horizon H, the input state-control pair x̃_k becomes uncertain with a Gaussian distribution (initially x̃_0 is deterministic). We define the joint distribution over the state-control pair at step k as p(x̃_k) = p(x_k, u_k) ∼ N(μ̃_k, Σ̃_k). The distribution over the state transition then becomes p(dx_k) = ∫ p(f(x̃_k) | x̃_k) p(x̃_k) dx̃_k. In general this predictive distribution cannot be computed analytically, because the nonlinear mapping of a Gaussian input distribution leads to a non-Gaussian predictive distribution. However, the predictive distribution can be approximated by a Gaussian p(dx_k) ∼ N(dμ_k, dΣ_k) [16].
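As a concrete sketch, the one-step prediction of eq. (3) can be implemented in a few lines of numpy. This is only an illustration, not the paper's code: it assumes an identity length-scale matrix W by default and folds the noise in as σ_n²·I (a common convention).

```python
import numpy as np

def sq_exp_kernel(A, B, sig_s=1.0, W=None):
    # K(a, b) = sig_s^2 * exp(-0.5 * (a - b)^T W (a - b)); W defaults to identity.
    if W is None:
        W = np.eye(A.shape[1])
    D = A[:, None, :] - B[None, :, :]
    return sig_s**2 * np.exp(-0.5 * np.einsum('ijk,kl,ijl->ij', D, W, D))

def gp_predict(X_train, dX_train, x_star, sig_n=1e-2):
    # One-step predictive mean and variance of the state transition, eq. (3).
    K = sq_exp_kernel(X_train, X_train) + sig_n**2 * np.eye(len(X_train))
    k_star = sq_exp_kernel(x_star[None, :], X_train)       # shape (1, N)
    alpha = np.linalg.solve(K, dX_train)                   # K^{-1} dX
    mean = (k_star @ alpha).ravel()
    var = (sq_exp_kernel(x_star[None, :], x_star[None, :])
           - k_star @ np.linalg.solve(K, k_star.T)).ravel()
    return mean, var
```

At a training input, with small σ_n, the posterior mean reproduces the training target and the predictive variance collapses, as expected of a GP interpolant.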
Thus the state distribution at k + 1 is also Gaussian, N(μ_{k+1}, Σ_{k+1}) [14]:

\mu_{k+1} = \mu_k + d\mu_k, \quad \Sigma_{k+1} = \Sigma_k + d\Sigma_k + \mathrm{COV}_{f,\tilde{x}_k}[x_k, dx_k] + \mathrm{COV}_{f,\tilde{x}_k}[dx_k, x_k].  (4)

Given an input joint distribution N(μ̃_k, Σ̃_k), we employ the moment matching approach [16][14] to compute the posterior GP. The predictive mean dμ_k is evaluated as

d\mu_k = \mathbb{E}_{\tilde{x}_k}\big[\mathbb{E}_f[dx_k]\big] = \int \mathbb{E}_f[dx_k]\, \mathcal{N}(\tilde{\mu}_k, \tilde{\Sigma}_k)\, d\tilde{x}_k.

Next, we compute the predictive covariance matrix

d\Sigma_k = \begin{bmatrix} \mathrm{VAR}_{f,\tilde{x}_k}[dx_{k_1}] & \cdots & \mathrm{COV}_{f,\tilde{x}_k}[dx_{k_1}, dx_{k_n}] \\ \vdots & \ddots & \vdots \\ \mathrm{COV}_{f,\tilde{x}_k}[dx_{k_n}, dx_{k_1}] & \cdots & \mathrm{VAR}_{f,\tilde{x}_k}[dx_{k_n}] \end{bmatrix},

where the variance term on the diagonal for output dimension i is

\mathrm{VAR}_{f,\tilde{x}_k}[dx_{k_i}] = \mathbb{E}_{\tilde{x}_k}\big[\mathrm{VAR}_f[dx_{k_i}]\big] + \mathbb{E}_{\tilde{x}_k}\big[\mathbb{E}_f[dx_{k_i}]^2\big] - \mathbb{E}_{\tilde{x}_k}\big[\mathbb{E}_f[dx_{k_i}]\big]^2,  (5)

and the off-diagonal covariance term for output dimensions i, j is

\mathrm{COV}_{f,\tilde{x}_k}[dx_{k_i}, dx_{k_j}] = \mathbb{E}_{\tilde{x}_k}\big[\mathbb{E}_f[dx_{k_i}]\,\mathbb{E}_f[dx_{k_j}]\big] - \mathbb{E}_{\tilde{x}_k}\big[\mathbb{E}_f[dx_{k_i}]\big]\,\mathbb{E}_{\tilde{x}_k}\big[\mathbb{E}_f[dx_{k_j}]\big].  (6)

The input-output cross-covariance is

\mathrm{COV}_{f,\tilde{x}_k}[\tilde{x}_k, dx_k] = \mathbb{E}_{\tilde{x}_k}\big[\tilde{x}_k\, \mathbb{E}_f[dx_k]^{\mathsf T}\big] - \mathbb{E}_{\tilde{x}_k}[\tilde{x}_k]\, \mathbb{E}_{f,\tilde{x}_k}[dx_k]^{\mathsf T},  (7)

and COV_{f,x̃_k}[x_k, dx_k] is easily obtained as a sub-matrix of (7). The kernel hyper-parameters Θ = (σ_n, σ_s, W) can be learned by maximizing the log-likelihood of the training outputs given the inputs:

\Theta^* = \arg\max_{\Theta} \log p(dX \mid \tilde{X}, \Theta).  (8)

This optimization problem can be solved using numerical methods such as conjugate gradient [17].

2.3 Local dynamics model

In DDP-related algorithms, a local model along a nominal trajectory (x̄_k, ū_k) is created based on: i) a first- or second-order approximation of the dynamics model; ii) a second-order local approximation of the value function. In our proposed PDDP framework, we create a local model along a trajectory of state distribution-control pairs (p(x̄_k), ū_k). In order to incorporate uncertainty explicitly in the local model, we introduce the Gaussian belief augmented state vector z^x_k = [μ_k, vec(Σ_k)]ᵀ ∈ R^{n + n×n}, where vec(Σ_k) is the vectorization of Σ_k. We now create a local linear model of the dynamics.
Based on eq. (4), the dynamics model with the augmented state is

z^x_{k+1} = F(z^x_k, u_k).  (9)

Define the state and control variations δz^x_k = z^x_k − z̄^x_k and δu_k = u_k − ū_k. In this work we consider the first-order expansion of the dynamics. More precisely, we have

\delta z^x_{k+1} = F^x_k\, \delta z^x_k + F^u_k\, \delta u_k,  (10)

where the Jacobian matrices F^x_k and F^u_k are specified as

F^x_k = \nabla_{z^x_k} F = \begin{bmatrix} \frac{\partial \mu_{k+1}}{\partial \mu_k} & \frac{\partial \mu_{k+1}}{\partial \Sigma_k} \\ \frac{\partial \Sigma_{k+1}}{\partial \mu_k} & \frac{\partial \Sigma_{k+1}}{\partial \Sigma_k} \end{bmatrix} \in \mathbb{R}^{(n+n^2)\times(n+n^2)}, \quad F^u_k = \nabla_{u_k} F = \begin{bmatrix} \frac{\partial \mu_{k+1}}{\partial u_k} \\ \frac{\partial \Sigma_{k+1}}{\partial u_k} \end{bmatrix} \in \mathbb{R}^{(n+n^2)\times m}.  (11)

The partial derivatives ∂μ_{k+1}/∂μ_k, ∂μ_{k+1}/∂Σ_k, ∂Σ_{k+1}/∂μ_k, ∂Σ_{k+1}/∂Σ_k, ∂μ_{k+1}/∂u_k and ∂Σ_{k+1}/∂u_k can be computed analytically; their forms are provided in the supplementary document of this work. For numerical implementation, the dimension of the augmented state can be reduced by eliminating the redundancy of Σ_k, and the principal square root of Σ_k may be used for numerical robustness [6].

2.4 Cost function

In classical DDP and many optimal control problems, the following quadratic cost function is used:

L(x_k, u_k) = (x_k - x^{goal}_k)^{\mathsf T} Q (x_k - x^{goal}_k) + u_k^{\mathsf T} R u_k,  (12)

where x^{goal}_k is the target state. Given the distribution p(x_k) ∼ N(μ_k, Σ_k), the expectation of the original quadratic cost function is

\mathbb{E}_{x_k}\big[L(x_k, u_k)\big] = \mathrm{tr}(Q\Sigma_k) + (\mu_k - x^{goal}_k)^{\mathsf T} Q (\mu_k - x^{goal}_k) + u_k^{\mathsf T} R u_k.  (13)

In PDDP, we use the cost function L(z^x_k, u_k) = E_{x_k}[L(x_k, u_k)]. The analytic expressions of the partial derivatives ∂L(z^x_k, u_k)/∂z^x_k and ∂L(z^x_k, u_k)/∂u_k are easily obtained. The cost function (13) scales linearly with the state covariance; the exploration strategy of PDDP is therefore balanced between the distance from the target and the variance of the state. This strategy fits well with DDP-related frameworks that rely on local approximations of the dynamics: a locally optimal controller obtained from high-risk explorations in uncertain regions might be highly undesirable.
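The expected cost of eq. (13) is straightforward to implement. A minimal sketch (illustrative only; matrix shapes are assumed conformable):

```python
import numpy as np

def expected_cost(mu, Sigma, u, Q, R, x_goal):
    # E_x[L(x, u)] for x ~ N(mu, Sigma) with the quadratic cost of eq. (12):
    # tr(Q Sigma) + (mu - x_goal)^T Q (mu - x_goal) + u^T R u   (eq. 13)
    d = mu - x_goal
    return np.trace(Q @ Sigma) + d @ Q @ d + u @ R @ u
```

Note that the tr(QΣ_k) term is exactly what penalizes state uncertainty and produces the balanced exploration behavior discussed above.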
2.5 Control policy

The Bellman equation for the value function in discrete time is specified as follows:

V(z^x_k, k) = \min_{u_k} \mathbb{E}\Big[ \underbrace{L(z^x_k, u_k) + V\big(F(z^x_k, u_k), k+1\big)}_{Q(z^x_k, u_k)} \;\Big|\; x_k \Big].  (14)

We create a quadratic local model of the value function by expanding the Q-function up to second order:

Q_k(z^x_k + \delta z^x_k, u_k + \delta u_k) \approx Q^0_k + Q^x_k \delta z^x_k + Q^u_k \delta u_k + \frac{1}{2} \begin{bmatrix} \delta z^x_k \\ \delta u_k \end{bmatrix}^{\mathsf T} \begin{bmatrix} Q^{xx}_k & Q^{xu}_k \\ Q^{ux}_k & Q^{uu}_k \end{bmatrix} \begin{bmatrix} \delta z^x_k \\ \delta u_k \end{bmatrix},  (15)

where the superscripts of the Q-function indicate derivatives; for instance, Q^x_k = ∇_x Q_k(z^x_k, u_k). For the rest of the paper, we use this superscript rule for L and V as well. To find the optimal control policy, we compute the local variation in control δû_k that minimizes the Q-function:

\delta \hat{u}_k = \arg\min_{u_k} Q_k(z^x_k + \delta z^x_k, u_k + \delta u_k) = \underbrace{-(Q^{uu}_k)^{-1} Q^u_k}_{I_k} \underbrace{-\,(Q^{uu}_k)^{-1} Q^{ux}_k}_{L_k}\, \delta z^x_k = I_k + L_k \delta z^x_k.  (16)

The optimal control is then û_k = ū_k + δû_k. The quadratic expansion of the value function is propagated backward based on the following equations:

Q^x_k = L^x_k + V^x_k F^x_k, \quad Q^u_k = L^u_k + V^x_k F^u_k, \quad Q^{xx}_k = L^{xx}_k + (F^x_k)^{\mathsf T} V^{xx}_k F^x_k,
Q^{ux}_k = L^{ux}_k + (F^u_k)^{\mathsf T} V^{xx}_k F^x_k, \quad Q^{uu}_k = L^{uu}_k + (F^u_k)^{\mathsf T} V^{xx}_k F^u_k,
V_{k-1} = V_k + Q^u_k I_k, \quad V^x_{k-1} = Q^x_k + Q^u_k L_k, \quad V^{xx}_{k-1} = Q^{xx}_k + Q^{xu}_k L_k.  (17)

The second-order local approximation of the value function is propagated backward in time iteratively. We use the learned controller to generate a locally optimal trajectory by propagating the dynamics forward in time. The control policy is a linear function of the augmented state z^x_k; the controller is therefore deterministic. The state propagation has been discussed in Sec. 2.2.

2.6 Summary of algorithm

The proposed method is summarized in Algorithm 1 and consists of 8 modules. In Model learning (Steps 1-2) we sample trajectories from the original physical system in order to collect training data and learn a probabilistic model.
In Local approximation (Step 4) we obtain a local linear approximation (10) of the learned probabilistic model along a nominal trajectory by computing the Jacobian matrices (11). In Controller learning (Step 5) we compute a locally optimal control sequence (16) by backward propagation of the value function (17). To ensure convergence, we employ the line search strategy of [2]: we compute the control law as δû_k = αI_k + L_k δz^x_k, with α = 1 initially, and then decrease α until the expected cost is smaller than in the previous iteration. In Forward propagation (Step 6), we apply the control sequence from the last step and obtain a new nominal trajectory for the next iteration. In Convergence condition (Step 7), we set a threshold J* on the accumulated cost such that when J^π < J*, the algorithm terminates with the optimized state and control trajectory. In Interaction condition (Step 8), when the state covariance Σ_k exceeds a threshold Σ_tol, we sample new trajectories from the physical system using the control obtained in Step 5 and go back to Step 2 to learn a more accurate model; old GP training data points are removed from the training set to keep its size fixed. Finally, in Nominal trajectory update (Step 9), the trajectory obtained in Step 6 or 8 becomes the new nominal trajectory for the next iteration. A simple illustration of the algorithm is shown in Fig. 3a. Intuitively, PDDP requires interactions with the physical system only if the GP model no longer represents the true dynamics around the nominal trajectory.
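The backward sweep of Step 5 (eqs. (16)-(17)) can be sketched as a single recursion step. This is an illustration under the paper's row-vector convention for V^x; a practical implementation would also regularize Q^{uu}_k before inversion, and the scalar update V_{k-1} = V_k + Q^u_k I_k is omitted for brevity:

```python
import numpy as np

def backward_step(Lx, Lu, Lxx, Lux, Luu, Fx, Fu, Vx, Vxx):
    # One step of the value-function backward sweep, eqs. (16)-(17).
    Qx  = Lx + Vx @ Fx
    Qu  = Lu + Vx @ Fu
    Qxx = Lxx + Fx.T @ Vxx @ Fx
    Qux = Lux + Fu.T @ Vxx @ Fx
    Quu = Luu + Fu.T @ Vxx @ Fu
    Quu_inv = np.linalg.inv(Quu)
    I_k = -Quu_inv @ Qu           # feedforward term of eq. (16)
    L_k = -Quu_inv @ Qux          # feedback gain of eq. (16)
    Vx_new  = Qx + Qu @ L_k
    Vxx_new = Qxx + Qux.T @ L_k   # using Q^{xu} = (Q^{ux})^T
    return I_k, L_k, Vx_new, Vxx_new
```

Iterating this step from the terminal time to k = 0 yields the gains I_k, L_k that define the linear, time-varying policy δû_k = I_k + L_k δz^x_k.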
Given: A system with unknown dynamics, target states
Goal: An optimized trajectory of states and controls
1. Generate N state trajectories by applying random control sequences to the physical system (1);
2. Obtain state and control training pairs from the sampled trajectories and optimize the GP hyper-parameters (8);
3. for i = 1 to Imax do
4.   Compute a linear approximation of the dynamics along (z̄^x_k, ū_k) (10);
5.   Backpropagate in time to get the locally optimal control û_k = ū_k + δû_k and value function V(z^x_k, k) according to (16), (17);
6.   Forward propagate the dynamics (9) by applying the optimal control û_k to obtain a new trajectory (z^x_k, u_k);
7.   if converged then break the for loop;
8.   if Σ_k > Σ_tol then apply the optimal control to the original physical system to generate a new nominal trajectory (z^x_k, u_k) and N − 1 additional trajectories by applying small variations of the learned controller; update the GP training set and go back to Step 2;
9.   Set z̄^x_k = z^x_k, ū_k = u_k, i = i + 1, and go back to Step 4;
10. end
11. Apply the optimized controller to the physical system and obtain the optimized trajectory.

Algorithm 1: PDDP algorithm

2.7 Computational complexity

Dynamics propagation: The major computational effort is devoted to GP inference. In particular, the complexity of one-step moment matching (Sec. 2.2) is O(N²n²(n+m)) [14], which is fixed during the iterative process of PDDP. We found that a small number of sampled trajectories (N ≤ 5) is able to provide good performance for systems of moderate size (6-12 state dimensions). For higher-dimensional problems, however, sparse or local approximations of the GP (e.g., [11][18][19]) may be used to reduce the computational cost of GP dynamics propagation. Controller learning: According to (16), learning the policy parameters I_k and L_k requires computing the inverse of Q^{uu}_k, which has computational complexity O(m³), where m is the dimension of the control input.
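The control flow of Algorithm 1 (Steps 3-10) can be sketched as a generic loop. All callables below are hypothetical stand-ins injected by the caller, and the GP fitting of Steps 1-2 is assumed to live inside `linearize` and `resample`:

```python
def pddp_loop(linearize, backward_pass, forward_pass, cost, nominal, J_star,
              needs_resample=lambda traj: False, resample=None, max_iter=50):
    # Skeleton of the Algorithm 1 main loop (Steps 3-10).
    traj = nominal
    for _ in range(max_iter):
        local_model = linearize(traj)            # Step 4, eq. (10)
        controller = backward_pass(local_model)  # Step 5, eqs. (16)-(17)
        traj = forward_pass(controller, traj)    # Step 6
        if cost(traj) < J_star:                  # Step 7: convergence condition
            break
        if needs_resample(traj):                 # Step 8: interaction condition
            traj = resample(controller)          # sample from physical system
    return traj
```

With mock callables whose forward pass halves a scalar "cost proxy", the loop terminates as soon as the cost drops below the threshold J*.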
As a local trajectory optimization method, PDDP offers scalability comparable to classical DDP.

2.8 Relation to existing works

Here we summarize the novel features of PDDP in comparison with some notable DDP-related frameworks for stochastic systems (see also Table 1). First, PDDP shares some similarities with the belief space iLQG framework [6], which approximates the belief dynamics using an extended Kalman filter. Belief space iLQG assumes a given dynamics model, with stochasticity coming from process noise. PDDP, by contrast, is a data-driven approach that learns the dynamics model and controls from sampled data, and it takes model uncertainty into account by using GPs. Second, PDDP is also comparable with iLQG-LD [5], which applies Locally Weighted Projection Regression (LWPR) to represent the dynamics. iLQG-LD does not incorporate model uncertainty and therefore requires a large amount of data to learn an accurate model. Third, PDDP does not suffer from the high computational cost of the finite differences used to numerically compute the first-order [2][6] and second-order [4] expansions of the underlying stochastic dynamics; PDDP computes the Jacobian matrices analytically (11).

                   PDDP               Belief space iLQG [6]   iLQG-LD [5]         iLQG [2] / sDDP [4]
State              μ_k, Σ_k           μ_k, Σ_k                x_k                 x_k
Dynamics model     Unknown            Known                   Unknown             Known
Linearization      Analytic Jacobian  Finite differences      Analytic Jacobian   Finite differences

Table 1: Comparison with DDP-related frameworks

3 Experimental Evaluation

We evaluate the PDDP framework on two nontrivial simulated examples: i) cart-double inverted pendulum swing-up; ii) six-link robotic arm reaching. We also compare the learning efficiency of PDDP with classical DDP [1] and PILCO [13][14]. All experiments were performed in MATLAB.
3.1 Cart-double inverted pendulum swing-up

Cart-Double Inverted Pendulum (CDIP) swing-up is a challenging control problem because the system is highly underactuated, with 3 degrees of freedom and only 1 control input. The system has 6 state dimensions (cart position/velocity, link 1 and 2 angles and angular velocities). The swing-up problem is to find a sequence of control inputs that forces both pendulums from the initial position (π, π) to the inverted position (2π, 2π). The balancing task requires the velocity of the cart and the angular velocities of both pendulums to be zero. We sample 4 initial trajectories with time horizon H = 50. The CDIP swing-up problem has previously been solved using two separate controllers for swing-up and balancing, respectively [20]. PILCO [14] is one of the few RL methods able to complete this task without knowing the dynamics. The results are shown in Fig. 1.

[Figure 1: Results for the CDIP task. (a) Optimized state trajectories of PDDP (cart position/velocity, link 1 and 2 angles and angular velocities); solid lines indicate means, error bars indicate variances. (b) Cost comparison of PDDP, DDP and PILCO; costs (eq. 13) were computed from sampled trajectories by applying the final controllers.]

3.2 Six-link robotic arm

The six-link robotic arm model consists of six links of equal length and mass, connected in an open chain with revolute joints. The system has 6 degrees of freedom and 12 state dimensions (angle and angular velocity for each joint). The goal for the first 3 joints is to move to the target angle π/4, and for the remaining 3 joints to −π/4. The desired velocities of all 6 joints are zero. We sample 2 initial trajectories with time horizon H = 50. The results are shown in Fig. 2.
[Figure 2: Results for the 6-link arm task. (a) Optimized state trajectories (angles and angular velocities) of PDDP; solid lines indicate means, error bars indicate variances. (b) Cost comparison of PDDP, DDP and PILCO; costs (eq. 13) were computed from sampled trajectories by applying the final controllers.]

3.3 Comparative analysis

DDP: Originally introduced in the 70's, classical DDP [1] is still one of the most effective and efficient trajectory optimization approaches. The major differences between DDP and PDDP can be summarized as follows: firstly, DDP relies on a given, accurate dynamics model, while PDDP is a data-driven framework that learns a locally accurate model by forward sampling; secondly, DDP does not deal with model uncertainty, whereas PDDP takes model uncertainty into account using GPs and performs local dynamic programming in Gaussian belief spaces; thirdly, in applications of DDP linearizations are generally performed using finite differences, while in PDDP the Jacobian matrices are computed analytically (11).

PILCO: The recently proposed PILCO framework [14] has demonstrated state-of-the-art learning efficiency compared with other methods such as [21][22]. The proposed PDDP differs from PILCO in several ways. Firstly, based on a local linear approximation of the dynamics and a quadratic approximation of the value function, PDDP finds a linear, time-varying feedforward and feedback policy; PILCO requires an a priori policy parameterization and an extra optimization solver. Secondly, PDDP keeps a fixed-size training set for GP inference, while PILCO adds new data to the training set after each trial (recently, the authors applied a sparse GP approximation [19] in an improved version of PILCO once the data size reaches a threshold).
Thirdly, by using the Gaussian belief augmented state and the cost function (13), PDDP's exploration scheme is balanced between the distance from the target and the variance of the state. PILCO employs a saturating cost function, which leads to automatic exploration of high-variance regions in the early stages of learning. In both tasks, PDDP, DDP and PILCO bring the system to the desired states. The resulting trajectories for PDDP are shown in Figs. 1a and 2a. The reason for the low variances of some optimized trajectories is that during the final stage of learning, interactions with the physical systems (forward samplings using the locally optimal controller) reduce the variances significantly. The costs are shown in Figs. 1b and 2b. For both tasks, PDDP and DDP perform similarly, and slightly differently from PILCO in terms of cost reduction. The major reasons for this difference are: i) the different cost functions used by these methods; ii) we did not impose any convergence condition for the optimized trajectories on PILCO. We now compare PDDP with DDP and PILCO in terms of data-efficiency and controller learning speed. Data-efficiency: As shown in Fig. 4a, in both tasks PDDP performs slightly worse than PILCO in terms of data-efficiency, measured by the number of interactions required with the physical systems. For the systems used for testing, PDDP requires around 15%-25% more interactions than PILCO. The number of interactions indicates the amount of sampled trajectories required from the physical system; at each trial we sample N trajectories from the physical system (Algorithm 1). Possible reasons for the slightly worse performance are: i) PDDP's policy is linear, which is restrictive, while PILCO allows nonlinear policy parameterizations; ii) PDDP's exploration scheme is more conservative than PILCO's in the early stages of learning. We believe PILCO is the most data-efficient framework for these tasks.
However, PDDP is able to offer comparable performance thanks to its probabilistic representation of the dynamics and the use of the Gaussian belief augmented state. Learning speed: In terms of the total computational time required to obtain the final controller, PDDP outperforms PILCO significantly, as shown in Fig. 4b. For the 6- and 12-dimensional systems used for testing, PILCO requires an iterative method (e.g., CG or BFGS) to solve high-dimensional optimization problems (depending on the policy parameterization), while PDDP computes locally optimal controls (16) without an extra optimizer. In terms of computational time per iteration, as shown in Fig. 3b, PDDP is slower than classical DDP due to the high computational cost of GP dynamics propagation. However, for DDP, the time dedicated to linearizing the dynamics model is around 70%-90% of the total time per iteration for the two tasks considered in this work. PDDP avoids the high computational cost of finite differences by evaluating all Jacobian matrices analytically; the time dedicated to linearization is less than 10% of the total time per iteration.

[Figure 3: (a) An intuitive illustration of the PDDP framework (control policy, GP dynamics, local model, cost function, physical system). (b) Comparison of PDDP and DDP in terms of computational time per iteration (in seconds) for the CDIP (left subfigure) and 6-link arm (right subfigure) tasks. Green indicates time for performing linearization, cyan indicates time for the forward and backward sweeps (Sec. 2.6).]

[Figure 4: Comparison of PDDP and PILCO in terms of data-efficiency and controller learning speed. (a) Number of interactions with the physical systems required to obtain the final results in Figs. 1 and 2. (b) Total computational time (in minutes) consumed to obtain the final controllers.]

4 Conclusions

In this work we have introduced a probabilistic model-based control and trajectory optimization method for systems with unknown dynamics, based on Differential Dynamic Programming (DDP) and Gaussian processes (GPs), called Probabilistic Differential Dynamic Programming (PDDP). PDDP takes model uncertainty into account explicitly by representing the dynamics using GPs and performing local dynamic programming in Gaussian belief spaces. Based on the quadratic approximation of the value function, PDDP yields a linear, locally optimal control policy and features a more efficient control improvement scheme compared with typical gradient-based policy search methods. Thanks to the probabilistic representation of the dynamics, PDDP offers data-efficiency comparable to a state-of-the-art GP-based policy search method [14]. In general, local trajectory optimization is a powerful approach to challenging control and RL problems.
Due to their model-based nature, model inaccuracy has always been the major obstacle to advanced applications of such methods. Grounded in the solid developments of classical trajectory optimization and Bayesian machine learning, the proposed PDDP has demonstrated encouraging performance and potential for many applications.

Acknowledgments

This work was partially supported by National Science Foundation grant NRI-1426945.

References

[1] D. Jacobson and D. Mayne. Differential dynamic programming. 1970.
[2] E. Todorov and W. Li. A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems. In American Control Conference, pages 300-306, June 2005.
[3] Y. Tassa, T. Erez, and W. D. Smart. Receding horizon differential dynamic programming. In NIPS, pages 1465-1472.
[4] E. Theodorou, Y. Tassa, and E. Todorov. Stochastic differential dynamic programming. In American Control Conference, pages 1125-1132, June 2010.
[5] D. Mitrovic, S. Klanke, and S. Vijayakumar. Adaptive optimal feedback control with learned internal dynamics models. In From Motor Learning to Interaction Learning in Robots, pages 65-84. Springer, 2010.
[6] J. Van Den Berg, S. Patil, and R. Alterovitz. Motion planning under uncertainty using iterative local optimization in belief space. The International Journal of Robotics Research, 31(11):1263-1278, 2012.
[7] S. Levine and V. Koltun. Variational policy search via trajectory optimization. In NIPS, pages 207-215, 2013.
[8] J. Morimoto and C. G. Atkeson. Minimax differential dynamic programming: An application to robust biped walking. In NIPS, pages 1539-1546, 2002.
[9] P. Abbeel, A. Coates, M. Quigley, and A. Y. Ng. An application of reinforcement learning to aerobatic helicopter flight. In NIPS, pages 1-8, 2007.
[10] C. E. Rasmussen and M. Kuss. Gaussian processes in reinforcement learning. In NIPS, pages 751-759, 2003.
[11] D. Nguyen-Tuong, J. Peters, and M. Seeger.
Local Gaussian process regression for real-time online model learning. In NIPS, pages 1193-1200, 2008.
[12] M. P. Deisenroth, C. E. Rasmussen, and J. Peters. Gaussian process dynamic programming. Neurocomputing, 72(7):1508-1524, 2009.
[13] M. P. Deisenroth and C. E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In ICML, pages 465-472, 2011.
[14] M. P. Deisenroth, D. Fox, and C. E. Rasmussen. Gaussian processes for data-efficient learning in robotics and control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27:75-90, 2014.
[15] P. Hemakumara and S. Sukkarieh. Learning UAV stability and control derivatives using Gaussian processes. IEEE Transactions on Robotics, 29:813-824, 2013.
[16] J. Quiñonero Candela, A. Girard, J. Larsen, and C. E. Rasmussen. Propagation of uncertainty in Bayesian kernel models: application to multiple-step ahead forecasting. In IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003.
[17] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[18] L. Csató and M. Opper. Sparse on-line Gaussian processes. Neural Computation, 14(3):641-668, 2002.
[19] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In NIPS, pages 1257-1264, 2005.
[20] W. Zhong and H. Rock. Energy and passivity based control of the double inverted pendulum on a cart. In International Conference on Control Applications, pages 896-901, Sept 2001.
[21] T. Raiko and M. Tornio. Variational Bayesian learning of nonlinear hidden state-space models for model predictive control. Neurocomputing, 72(16):3704-3712, 2009.
[22] H. van Hasselt. Insights in reinforcement learning. Hado van Hasselt, 2011.
Improved Multimodal Deep Learning with Variation of Information

Kihyuk Sohn, Wenling Shang and Honglak Lee
University of Michigan, Ann Arbor, MI, USA
{kihyuks,shangw,honglak}@umich.edu

Abstract

Deep learning has been successfully applied to multimodal representation learning problems, a common strategy being to learn joint representations that are shared across multiple modalities on top of layers of modality-specific networks. Nonetheless, there still remains the question of how to learn a good association between data modalities; in particular, a good generative model of multimodal data should be able to reason about a missing data modality given the rest of the data modalities. In this paper, we propose a novel multimodal representation learning framework that explicitly aims at this goal. Rather than learning with maximum likelihood, we train the model to minimize the variation of information. We provide a theoretical insight into why the proposed learning objective is sufficient to estimate the data-generating joint distribution of multimodal data. We apply our method to restricted Boltzmann machines and introduce learning methods based on contrastive divergence and multi-prediction training. In addition, we extend the model to deep networks with a recurrent encoding structure to finetune the whole network. In experiments, we demonstrate state-of-the-art visual recognition performance on the MIR-Flickr database and the PASCAL VOC 2007 database, with and without text features.

1 Introduction

Multiple data modalities of different types can be used to describe the same event. For example, images, which are often represented with pixels or image descriptors, can also be described with accompanying text (e.g., user tags or subtitles) or audio data (e.g., human voice or natural sound).
There have been several applications of multimodal learning across domains, such as emotion [13] and speech [10] recognition with audio-visual data, robotics applications with visual and depth data [15, 17, 32, 23], and medical applications with visual and temporal data [26]. These data from multiple sources are semantically correlated and sometimes provide complementary information to each other. In order to exchange such information, it is important to capture a high-level association between data modalities with a compact set of latent variables. However, learning associations between multiple heterogeneous data distributions is a challenging problem. A naive approach is to concatenate the data descriptors from different sources of input to construct a single high-dimensional feature vector and use it to solve a unimodal representation learning problem. Unfortunately, this approach has been unsuccessful, since the correlation between features within each data modality is much stronger than that between data modalities [21]. As a result, the learning algorithms are easily tempted to learn the dominant patterns in each data modality separately, while giving up on learning patterns that occur simultaneously in multiple data modalities. To resolve this issue, deep learning methods such as deep autoencoders [9] or deep Boltzmann machines (DBMs) [24] have been applied to this problem [21, 27], with a common strategy being to learn joint representations shared across multiple modalities at the higher layers of the deep network, after learning layers of modality-specific networks. The rationale is that the learned features may have less within-modality correlation than raw features, which makes it easier to capture patterns across data modalities. Despite this promise, there still remains the challenging question of how to learn a good association between multiple data modalities that can effectively deal with missing data modalities at testing time.
One necessary condition for being a good generative model of multimodal data is the ability to predict or reason about missing data modalities given a partial observation. To this end, we propose a novel multimodal representation learning framework that explicitly aims at this goal. The key idea is to minimize the information distance between data modalities through the shared latent representations. More concretely, we train the model to minimize the variation of information (VI), an information-theoretic measure that computes the distance between random variables, i.e., multiple data modalities. Note that this is in contrast to previous approaches to multimodal deep learning, which are based on maximum (joint) likelihood (ML) learning [21, 27]. We provide an intuition as to how our method can be more effective in learning the joint representation of multimodal data than ML learning, and show theoretical insights into why the proposed learning objective is sufficient to estimate the data-generating joint distribution of multimodal data. We apply the proposed framework to the multimodal restricted Boltzmann machine (MRBM). We introduce two learning algorithms, based on contrastive divergence [19] and multi-prediction training [6]. Finally, we extend the model to the multimodal deep recurrent neural network (MDRNN) for unsupervised finetuning of the whole network. In experiments, we demonstrate state-of-the-art visual recognition performance on the MIR-Flickr database and the PASCAL VOC 2007 database, with and without text features.

2 Multimodal Learning with Variation of Information

In this section, we propose a novel training objective based on the VI. We make a comparison to the ML objective, the typical learning objective for training models of multimodal data, to give an insight into how our proposal outperforms the baseline.
Finally, we establish a theorem showing that the proposed learning objective is sufficient to obtain a good generative model that fully recovers the joint data-generating distribution of multimodal data.

Notation. We use uppercase letters X, Y to denote random variables and lowercase letters x, y for their realizations. Let P_D be the data-generating distribution and P_θ the model distribution parameterized by θ. For presentation clarity, we slightly abuse the notation for Q to denote conditional (Q(x|y), Q(y|x)), marginal (Q(x), Q(y)), as well as joint distributions (Q(x, y)) that are derived from the joint distribution Q(x, y); the type of distribution for Q should be clear from the context.

2.1 Minimum Variation of Information Learning

Motivated by the necessary condition for good generative models to reason about a missing data modality, it seems natural to learn to maximize the amount of information that one data modality has about the others. We quantify this amount of information between data modalities using the variation of information (VI). The VI is an information-theoretic measure that computes the information distance between two random variables (e.g., data modalities), and is written as follows:¹

\mathrm{VI}_Q(X, Y) = -\mathbb{E}_{Q(X,Y)}\big[\log Q(X|Y) + \log Q(Y|X)\big],  (1)

where Q(X, Y) = P_θ(X, Y) is any joint distribution on the random variables (X, Y) parameterized by θ. Informally, the VI is small when the conditional likelihoods Q(X|Y) and Q(Y|X) are "peaked", meaning that X has low entropy conditioned on Y and vice versa. Following this intuition, we define a new multimodal learning criterion, minimum variation of information (MinVI) learning, as follows:

\mathrm{MinVI:}\ \min_\theta \mathcal{L}_{VI}(\theta), \quad \mathcal{L}_{VI}(\theta) = -\mathbb{E}_{P_D(X,Y)}\big[\log P_\theta(X|Y) + \log P_\theta(Y|X)\big].  (2)

Note the difference from (1): in L_VI(θ) we take the expectation with respect to the data distribution P_D. Furthermore, we observe that the MinVI objective decomposes into a sum of two negative conditional log-likelihoods.
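For intuition, eq. (1) can be evaluated exactly for a discrete joint distribution: the VI equals the sum of conditional entropies H(X|Y) + H(Y|X). The following small sketch (an illustration only; the paper works with RBMs, not probability tables) shows the two extremes:

```python
import numpy as np

def variation_of_information(P):
    # VI_Q(X, Y) = -E_{Q(X,Y)}[log Q(X|Y) + log Q(Y|X)] = H(X|Y) + H(Y|X)
    # for a discrete joint probability table P[x, y] (eq. 1), in nats.
    Px = P.sum(axis=1, keepdims=True)   # marginal Q(x)
    Py = P.sum(axis=0, keepdims=True)   # marginal Q(y)
    with np.errstate(divide='ignore', invalid='ignore'):
        terms = P * (np.log(P / Py) + np.log(P / Px))
    return -np.nansum(terms)            # 0 * log 0 contributes nothing
```

The VI is 0 when the modalities determine each other (a diagonal joint table) and 2 log 2 for two independent uniform binary modalities, matching the intuition that MinVI favors "peaked" conditionals.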
This aligns well with our initial motivation of reasoning about a missing data modality. In the following, we provide more insight into our MinVI objective in relation to the ML objective, the standard learning objective for generative models.

2.2 Relation to Maximum Likelihood Learning

The ML objective can be written as minimization of the negative LL (NLL):

ML:  min_θ L_NLL(θ),   L_NLL(θ) = -E_{P_D(X,Y)}[log P_θ(X, Y)],    (3)

and the NLL objective can be reformulated as follows:

2 L_NLL(θ) = [KL(P_D(X) || P_θ(X)) + KL(P_D(Y) || P_θ(Y))]                                  (a)
           + [E_{P_D(X)} KL(P_D(Y|X) || P_θ(Y|X)) + E_{P_D(Y)} KL(P_D(X|Y) || P_θ(X|Y))]    (b)
           + C,    (4)

where C is a constant independent of θ. Note that term (b) is equivalent to L_VI(θ) in Equation (2) up to a constant. We provide a full derivation of Equation (4) in the supplementary material. Ignoring the constant, the NLL objective is composed of four KL divergence terms. Since the KL divergence is non-negative and is 0 only when the two distributions match, ML learning in Equation (3) can be viewed as a distribution matching problem involving (a) marginal likelihoods and (b) conditional likelihoods. Here, we argue that (a) is more difficult to optimize than (b) because there are often too many modes in the marginal distribution. In comparison, the number of modes can be dramatically reduced in the conditional distribution, since the conditioning variables may effectively restrict the support of the random variable. Therefore, (a) may become the dominant factor to be minimized during the optimization process and, as a trade-off, (b) may easily be compromised, making it difficult to learn a good association between data modalities.
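The decomposition in Equation (4) can be checked numerically on a small discrete example. In this sketch the constant works out to twice the joint entropy of P_D; that identification of C is our own bookkeeping, stated here as an assumption consistent with Equation (4), not a formula from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def norm(a):
    return a / a.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

# Random data-generating and model joints over a 3x4 discrete space.
pd = norm(rng.random((3, 4)))
pt = norm(rng.random((3, 4)))

# Left side: 2 * L_NLL(theta) = -2 * E_{P_D}[log P_theta(X, Y)].
lhs = -2.0 * np.sum(pd * np.log(pt))

# Right side of Equation (4), term by term.
pd_x, pd_y = pd.sum(1), pd.sum(0)
pt_x, pt_y = pt.sum(1), pt.sum(0)
term_a = kl(pd_x, pt_x) + kl(pd_y, pt_y)
term_b = sum(pd_x[i] * kl(pd[i] / pd_x[i], pt[i] / pt_x[i]) for i in range(3)) \
       + sum(pd_y[j] * kl(pd[:, j] / pd_y[j], pt[:, j] / pt_y[j]) for j in range(4))
c = -2.0 * np.sum(pd * np.log(pd))   # C = 2 * H_{P_D}(X, Y), independent of theta
rhs = term_a + term_b + c
```

Running this for any random pair (P_D, P_θ) makes lhs and rhs agree to machine precision, which is exactly the content of Equation (4).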
The MinVI objective, on the other hand, focuses on modeling the conditional distributions (term (b) of Equation (4)), which is arguably easier to optimize. Indeed, a similar argument has been made for generalized denoising autoencoders (DAEs) [1] and generative stochastic networks (GSNs) [2], which focus on learning transition operators (e.g., P_θ(X | X~), where X~ is a corrupted version of the data X, or P_θ(X | H), where H can be arbitrary latent variables) to bypass the intractable problem of learning the density model P_θ(X).

2.3 Theoretical Results

Bengio et al. [1, 2] proved that learning the transition operators of DAEs or GSNs is sufficient to learn a good generative model that estimates the data-generating distribution. Under similar assumptions, we establish a theoretical result showing that we can obtain a good density estimator of the joint distribution of multimodal data by learning the transition operators derived from the conditional distributions of one data modality given the other. In our multimodal learning framework, the transition operators T^X_n and T^Y_n with model distribution P_{θ_n}(X, Y) are defined for Markov chains over the data modalities X and Y, respectively. Specifically,

T^X_n(x[t] | x[t-1]) = Σ_{y ∈ Y} P_{θ_n}(x[t] | y) P_{θ_n}(y | x[t-1]),

and T^Y_n is defined analogously. Now, we formalize the theorem as follows:

Theorem 2.1  For finite state spaces X, Y, if, for all x ∈ X and y ∈ Y, P_{θ_n}(· | y) and P_{θ_n}(· | x) converge in probability to P_D(· | y) and P_D(· | x), respectively, and T^X_n and T^Y_n are ergodic Markov chains, then, as the number of examples n → ∞, the asymptotic distributions π_n(X) and π_n(Y) converge to the data-generating marginal distributions P_D(X) and P_D(Y), respectively. Moreover, the joint distribution P_{θ_n}(x, y) converges to P_D(x, y) in probability.

The proof is provided in the supplementary material. The theorem ensures that the MinVI objective can lead to a good generative model that estimates the joint data-generating distribution of multimodal data.
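The role of the marginal as the stationary distribution of the transition operator can be checked directly in the finite case. The sketch below builds T^X from an arbitrary joint distribution; it is an illustration of the mechanics behind Theorem 2.1, not the paper's proof.

```python
import numpy as np

# An arbitrary joint distribution P(x, y) on finite spaces |X| = 3, |Y| = 4.
rng = np.random.default_rng(0)
p = rng.random((3, 4))
p /= p.sum()
px, py = p.sum(axis=1), p.sum(axis=0)

p_x_given_y = p / py             # entry [x, y] is P(x | y)
p_y_given_x = p / px[:, None]    # entry [x, y] is P(y | x)

# Transition operator T^X(x' | x) = sum_y P(x' | y) P(y | x):
# a 3x3 row-stochastic matrix built from the two conditionals.
T = p_y_given_x @ p_x_given_y.T
```

Multiplying the marginal P(x) by T reproduces P(x), i.e., the data marginal is stationary for the chain, which is the quantity Theorem 2.1 says the learned chain converges to.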
The theorem holds under two assumptions: consistency of the density estimators and ergodicity of the transition operators. The ergodicity assumption is satisfied for a wide variety of neural networks, such as RBMs and DBMs.^2 The consistency assumption is more difficult to satisfy: neither the aforementioned deep energy-based models nor RNNs may satisfy it, because their posteriors are approximated with factorized distributions. Deep networks that allow exact posterior inference, such as stochastic feedforward neural networks [20, 29], could be a better fit for our multimodal learning framework, but we leave this as future work.

3 Application to Multimodal Deep Learning

In this section, we describe MinVI learning in the multimodal deep learning framework. Our pipeline uses the commonly adopted network architecture consisting of layers of modality-specific deep networks followed by a layer of a neural network that jointly models the multiple modalities [21, 27]. The network is trained in two steps: in layer-wise pretraining, each layer of the modality-specific deep networks is trained as a restricted Boltzmann machine (RBM), and the top-layer shared network is trained as an MRBM with the MinVI objective (Section 3.2). Then, we finetune the whole deep network by constructing a multimodal deep recurrent neural network (MDRNN) (Section 3.3).

3.1 Restricted Boltzmann Machines for Multimodal Learning

The restricted Boltzmann machine (RBM) is an undirected graphical model that defines a distribution over visible units using hidden units. For multimodal input, we define the joint distribution of the
multimodal RBM (MRBM) [21, 27] as P(x, y, h) = (1/Z) exp(-E(x, y, h)) with the energy function

E(x, y, h) = -\sum_{i=1}^{N_x} \sum_{k=1}^{K} x_i W^x_{ik} h_k - \sum_{j=1}^{N_y} \sum_{k=1}^{K} y_j W^y_{jk} h_k - \sum_{k=1}^{K} b_k h_k - \sum_{i=1}^{N_x} c^x_i x_i - \sum_{j=1}^{N_y} c^y_j y_j,    (5)

where Z is the normalizing constant, x ∈ {0, 1}^{N_x} and y ∈ {0, 1}^{N_y} are the binary visible (i.e., observation) variables of the multimodal input, and h ∈ {0, 1}^K are the binary hidden (i.e., latent) variables. W^x ∈ R^{N_x × K} defines the weights between x and h, and W^y ∈ R^{N_y × K} defines the weights between y and h. c^x ∈ R^{N_x}, c^y ∈ R^{N_y}, and b ∈ R^K are the bias vectors corresponding to x, y, and h, respectively. Note that the MRBM is equivalent to an RBM whose visible variables are constructed by concatenating the visible variables of the input modalities, i.e., v = [x ; y]. Due to the bipartite structure, variables in the same layer are conditionally independent given the variables of the other layer, and the conditional probabilities are written as follows:

P(h_k = 1 | x, y) = σ(\sum_i W^x_{ik} x_i + \sum_j W^y_{jk} y_j + b_k),    (6)
P(x_i = 1 | h) = σ(\sum_k W^x_{ik} h_k + c^x_i),   P(y_j = 1 | h) = σ(\sum_k W^y_{jk} h_k + c^y_j),    (7)

where σ(a) = 1 / (1 + exp(-a)). As with the standard RBM, the MRBM can be trained to maximize the joint LL (log P(x, y)) using stochastic gradient descent (SGD) while approximating the gradient with contrastive divergence (CD) [8] or persistent CD (PCD) [30]. In our case, however, we train the MRBM with the MinVI criterion; we discuss the inference and training algorithms in Section 3.2. When we have access to all data modalities, we can use Equation (6) for exact posterior inference. On the other hand, when some input modalities are missing, inference is intractable, and we resort to a variational method. For example, when we are given x but not y, the true posterior can be approximated with a fully factorized distribution Q(y, h) = \prod_j \prod_k Q(y_j) Q(h_k) by minimizing KL(Q(y, h) || P_θ(y, h | x)).

^2 For energy-based models like the RBM and DBM, it is straightforward to see that every state has non-zero probability and can be reached from any other state. However, the mixing of the chain might be slow in practice.
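The conditionals in Equations (6)-(7) can be sketched directly. The toy sizes and random weights below are hypothetical, chosen only to exercise the formulas.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy MRBM sizes and random parameters (placeholders, not trained values).
Nx, Ny, K = 5, 4, 3
rng = np.random.default_rng(1)
Wx = 0.1 * rng.standard_normal((Nx, K))
Wy = 0.1 * rng.standard_normal((Ny, K))
b, cx, cy = np.zeros(K), np.zeros(Nx), np.zeros(Ny)

def p_h_given_xy(x, y):
    # Equation (6): P(h_k = 1 | x, y)
    return sigmoid(x @ Wx + y @ Wy + b)

def p_x_given_h(h):
    # Equation (7), visible units of modality x
    return sigmoid(h @ Wx.T + cx)

def p_y_given_h(h):
    # Equation (7), visible units of modality y
    return sigmoid(h @ Wy.T + cy)
```

A useful sanity check is the equivalence noted in the text: Equation (6) coincides with the hidden-unit conditional of an ordinary RBM on the concatenated visibles v = [x ; y] with stacked weights.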
This leads to the following fixed-point equations:

ĥ_k = σ(\sum_i W^x_{ik} x_i + \sum_j W^y_{jk} ŷ_j + b_k),   ŷ_j = σ(\sum_k W^y_{jk} ĥ_k + c^y_j),    (8)

where ĥ_k = Q(h_k) and ŷ_j = Q(y_j). The variational inference proceeds by alternately updating the mean-field parameters ĥ and ŷ, which are initialized with all 0's.

3.2 Training Algorithms

CD-PercLoss. As in Equation (2), the objective function decomposes into two conditional LLs, so the MRBM with the MinVI objective can equivalently be trained by training two conditional RBMs (CRBMs) with shared weights. Since the objective is a sum of two conditional LLs, we compute the (approximate) gradient of each CRBM separately using CD-PercLoss [19] and accumulate them to update the parameters.^3

Multi-Prediction. We found a few practical issues with CD-PercLoss training: first, the gradient estimates are inaccurate; second, there is a mismatch between the encoding processes at training and test time, especially when a unimodal query (i.e., one data modality is missing) is considered at test time. As an alternative objective, we propose multi-prediction (MP) training of the MRBM with the MinVI criterion. MP training was originally proposed for deep Boltzmann machines (DBMs) [6] as an alternative to stochastic approximation learning [24]. The idea is to train the model to be good at predicting any subset of input variables given the rest, by constructing a recurrent network whose encoding function is derived from the corresponding variational inference problem. MP training can be adapted to the MRBM with the MinVI objective with some modifications. For example, the CRBM with objective log P(y|x) can be trained by randomly selecting the subset of variables to be predicted only from the target modality y, while the conditioning modality x

^3 In CD-PercLoss learning, we run separate Gibbs chains for different conditioning variables and select the negative particles with the lowest free energy among the sampled particles.
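The fixed-point updates of Equation (8) amount to a short alternating loop. The sketch below assumes trained MRBM weights; here they are random placeholders.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def mean_field_missing_y(x, Wx, Wy, b, cy, n_iter=20):
    """Variational inference for a missing modality y given x (Equation (8)):
    alternately update hat_h and hat_y, both initialized with 0's.
    A sketch; the weights/biases are assumed to come from a trained MRBM."""
    h_hat = np.zeros(Wx.shape[1])
    y_hat = np.zeros(Wy.shape[0])
    for _ in range(n_iter):
        h_hat = sigmoid(x @ Wx + y_hat @ Wy + b)   # update Q(h_k)
        y_hat = sigmoid(h_hat @ Wy.T + cy)         # update Q(y_j)
    return y_hat, h_hat
```

With small weights the iteration is a contraction, so a few updates already reach the fixed point; the paper reports that 10-20 iterations suffice in practice.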
We refer to [19] for further details.

is assumed to be given in all cases. Specifically, given an arbitrary subset s ⊂ {1, ..., N_y} drawn from an independent Bernoulli distribution P_S, the MP algorithm predicts y_s = {y_j : j ∈ s} given x and y_{\s} = {y_j : j ∉ s} through the iterative encoding function derived from the fixed-point equations

ĥ_k = σ(\sum_i W^x_{ik} x_i + \sum_{j ∈ s} W^y_{jk} ŷ_j + \sum_{j ∉ s} W^y_{jk} y_j + b_k),   ŷ_j = σ(\sum_k W^y_{jk} ĥ_k + c^y_j),  j ∈ s,    (9)

which is the solution to the variational inference problem min_Q KL(Q(y_s, h) || P_θ(y_s, h | x, y_{\s})) with the factorized distribution Q(y_s, h) = \prod_{j ∈ s} \prod_k Q(y_j) Q(h_k). Note that Equation (9) is similar to Equation (8), except that only the y_j with j ∈ s are updated. Using the iterative encoding function, the network parameters are trained with SGD, computing the gradient by backpropagating the error between the prediction and the ground truth of y_s through the derived recurrent network. The MP formulation (i.e., the encoding function) of the CRBM with objective log P(x|y) is derived similarly, and the gradients of the two CRBMs are simply added. We have two additional hyperparameters: the number of mean-field updates and the sampling ratio of the subset s to be predicted from the target data modality. In our experiments, 10-20 iterations were sufficient for convergence. We used a sampling ratio of 1 (i.e., all variables in the target data modality are predicted), since we are already conditioning on one data modality, which is sufficient to make a good prediction of the variables in the target modality.

Figure 1: An instance of MDRNN with target y given x. Multiple iterations of bottom-up updates (y → h^(3); Equation (11)) and top-down updates (h^(3) → y; Equation (13)) are performed. The arrows indicate the encoding direction.
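Equation (9) differs from Equation (8) only in that the coordinates outside s stay clamped to their observed values. A minimal sketch of that encoding loop, with placeholder weights and hypothetical names:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def mp_encode(x, y, s, Wx, Wy, b, cy, n_iter=15):
    """Iterative encoding of Equation (9): predict y_s given x and the clamped
    coordinates y_{\\s}. Only y_j with j in s and the hidden mean-field
    parameters are updated; the weights are assumed to come from an MRBM."""
    y_hat = y.astype(float).copy()
    y_hat[s] = 0.0                                 # predicted part starts at 0
    for _ in range(n_iter):
        h_hat = sigmoid(x @ Wx + y_hat @ Wy + b)   # update Q(h_k)
        y_hat[s] = sigmoid(h_hat @ Wy.T + cy)[s]   # update Q(y_j), j in s only
    return y_hat, h_hat
```

During MP training, the prediction error on y_s would be backpropagated through these unrolled updates; the sketch shows only the forward encoding.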
3.3 Finetuning the Multimodal Deep Network with a Recurrent Neural Network

Motivated by MP training of the MRBM, we propose the multimodal deep recurrent neural network (MDRNN), which tries to predict the target modality given the input modality through a recurrent encoding function that iteratively performs a full pass of bottom-up and top-down encoding, from the bottom-layer visible variables up to the top-layer joint representation and back down through the modality-specific deep networks. We show an instance of an L = 3 layer MDRNN in Figure 1; the encoding functions are written as follows:^4

x → h^(L-1)_x :  h^(l)_x = σ(W^{x,(l)T} h^(l-1)_x + b^{x,(l)}),  l = 1, ..., L-1    (10)
y → h^(L-1)_y :  h^(l)_y = σ(W^{y,(l)T} h^(l-1)_y + b^{y,(l)}),  l = 1, ..., L-1    (11)
h^(L-1)_x, h^(L-1)_y → h^(L) :  h^(L) = σ(W^{x,(L)T} h^(L-1)_x + W^{y,(L)T} h^(L-1)_y + b^(L))    (12)
h^(L) → y :  h^(l-1)_y = σ(W^{y,(l)} h^(l)_y + b^{y,(l-1)}),  l = L, ..., 1    (13)

where h^(0)_x = x and h^(0)_y = y. The visible variables of the target modality are initialized with 0's. In other words, in the initial bottom-up pass, we compute h^(L) from x alone while setting y = 0, using Equations (10), (11), (12). Then, we run multiple iterations of top-down (Equation (13)) and bottom-up (Equations (11), (12)) updates. Finally, we compute the gradient by backpropagating the reconstruction error of the target modality through the network.

^4 There could be different ways of constructing an MDRNN; for instance, one could construct the RNN with DBM-style mean-field updates. In our empirical evaluation, however, running a full pass of bottom-up and top-down updates performed best, and DBM-style updates did not give competitive results.

Figure 2: Visualization of samples with the inferred missing modality. From top to bottom: ground truth; query (left or right halves of digits); samples with the missing modality inferred by the MRBM trained with the ML objective, with the MinVI objective using CD-PercLoss, and with the MinVI objective using MP training.
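The recurrent encoding of Equations (10)-(13) can be sketched for L = 3 as follows. This is a simplified forward pass with placeholder weights; the actual model backpropagates the reconstruction error through these iterations, which the sketch omits.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def mdrnn_predict_y(x, Wx, Wy, bx, by, b_top, cy, n_iter=10):
    """MDRNN encoding sketch (Equations (10)-(13)): predict target modality y
    from x by iterating full bottom-up/top-down passes. Wx and Wy are lists of
    per-layer weight matrices (assumed names, not the paper's notation)."""
    L = len(Wx)                                     # e.g. L = 3
    y = np.zeros(Wy[0].shape[0])                    # target visibles start at 0
    for _ in range(n_iter):
        hx, hy = x, y
        for l in range(L - 1):                      # bottom-up, Eq. (10)-(11)
            hx = sigmoid(hx @ Wx[l] + bx[l])
            hy = sigmoid(hy @ Wy[l] + by[l])
        h = sigmoid(hx @ Wx[L - 1] + hy @ Wy[L - 1] + b_top)   # Eq. (12)
        for l in range(L - 1, 0, -1):               # top-down, Eq. (13)
            h = sigmoid(h @ Wy[l].T + by[l - 1])
        y = sigmoid(h @ Wy[0].T + cy)               # reconstructed target modality
    return y
```

Running zero iterations of the loop corresponds to the feedforward baseline discussed in the experiments; each extra iteration refines the reconstructed target modality before re-encoding.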
Table 1: Test set handwritten digit recognition errors of MRBMs trained with different objectives and learning algorithms. A linear SVM was used for classification on the joint feature representations.

                          Input modalities at test time
                          Left+Right   Left     Right
  ML (PCD)                1.57%        14.98%   18.88%
  MinVI (CD-PercLoss)     1.71%        9.42%    11.02%
  MinVI (MP)              1.73%        6.58%    7.27%

4 Experiments

4.1 Toy Example on MNIST

In our first experiment, we evaluate the proposed learning algorithm on the MNIST handwritten digit recognition dataset [16]. We treat the left and right halves of the digit images as two input modalities and report recognition performance for different combinations of input modalities at test time, i.e., full (left + right) or missing (left or right only) modalities. We compare the performance of the MRBM trained with (1) the ML objective using PCD [30], and the MinVI objective with (2) CD-PercLoss or (3) MP training. The recognition errors are given in Table 1. Compared to ML training, the recognition errors for unimodal queries are reduced by more than half with MP training of the MinVI objective. For multimodal queries, the model trained with the ML objective performed best, although the gain was incremental. CD-PercLoss training of the MinVI objective also showed significant improvement over ML training, but the errors were not as low as those obtained with MP training. We believe that, although MP is an approximation of the MinVI objective, its exact gradient makes learning more efficient than CD-PercLoss. For the rest of the paper, we focus on the MP training method. In Figure 2, we visualize generated samples conditioned on one input modality (e.g., the left or right halves of digits). Many of the samples generated by the models with the MinVI objective look clearly better than those generated by the model with the ML objective.
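The two "modalities" of this toy setup are simply the two halves of each digit image. A minimal data-preparation sketch, not the authors' code:

```python
import numpy as np

def split_halves(image):
    """Split a 28x28 digit image into left/right halves, the two 'modalities'
    of the toy MNIST experiment described above."""
    assert image.shape == (28, 28)
    return image[:, :14].copy(), image[:, 14:].copy()

def join_halves(left, right):
    """Recombine the two modalities into the full image."""
    return np.hstack([left, right])
```

Concatenating the halves recovers the original image exactly, so a multimodal query carries the full digit while a unimodal query carries only one half.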
4.2 MIR-Flickr Database

In this section, we evaluate our methods on the MIR-Flickr database [11], which is composed of 1 million examples of images and their user tags, collected from the social photo-sharing website Flickr.^5 Among these, 25000 examples are annotated with 24 potential topics and 14 regular topics, leading to 38 classes in total with distributed class membership. The topics include object categories such as dog, flower, and people, and scenic concepts such as sky, sea, and night. We used the same visual and text features as in [27].^6 Specifically, the image feature is a 3857-dimensional vector composed of Pyramid Histogram of Words (PHOW) features [3], GIST [22], and MPEG-7 descriptors [18]. We preprocessed the image features to have zero mean and unit variance in each dimension across all examples. The text feature is a word-count vector over the 2000 most frequent tags. The number of tags per example varies from 0 to 72, with 5.15 tags on average. Following the experimental protocol of [12, 27], we randomly split the labeled data into 15000 examples for training and 10000 for testing, and used 5000 of the training examples for validation. We repeat this procedure 5 times and report the mean average precision (mAP) over the 38 classes.

Model Architecture. As in [27], the network is composed of [3857, 1024, 1024] variables for the visual pathway, [2000, 1024, 1024] variables for the text pathway, and 2048 variables for the top-layer MRBM. As described in Section 3, we pretrain the modality-specific deep networks in a greedy layer-wise manner, and finetune the whole network by initializing the MDRNN with the pretrained network. Specifically, we used a Gaussian RBM for the bottom layer of the visual pathway and a binary RBM for the text pathway.^7 The intermediate layers are trained as binary RBMs, and the top-layer MRBM is trained with the MP training algorithm.

^5 http://www.flickr.com
^6 http://www.cs.toronto.edu/~nitish/multimodal/index.html
For the layer-wise pretraining of the RBMs, we used PCD [30] to approximate the gradient. Since our algorithm requires both data modalities during training, we excluded examples with no or too sparse tags from the unlabeled dataset, using about 750K examples with at least 2 tags. After unsupervised training, we extract joint feature representations of the labeled training data and use them to train multiclass logistic regression classifiers.

Table 2: Test set mAPs on the MIR-Flickr database. We implemented the autoencoder following the description in [21]. Multimodal DBM† is the supervised finetuned model; see [28] for details.

  Model                   Multimodal query
  Autoencoder             0.610
  Multimodal DBM [27]     0.609
  Multimodal DBM† [28]    0.641
  MK-SVM [7]              0.623
  TagProp [31]            0.640
  MDRNN                   0.686 ± 0.003

  Model                   Unimodal query
  Autoencoder             0.495
  Multimodal DBM [27]     0.531
  MK-SVM [7]              0.530
  MDRNN                   0.607 ± 0.005

Recognition Tasks. For the recognition tasks, we train multiclass logistic regression classifiers using the joint representations as input features. Depending on the availability of data modalities at test time, we evaluate the performance on multimodal queries (both visual and text data available) and unimodal queries (visual data available, text data missing). The summary results are given in Table 2, where we report the test set mAPs of our model and competing methods. The proposed MDRNN outperforms the previous state of the art on multimodal queries by 4.5% in mAP. The improvement is even more significant for unimodal queries: 7.6% in mAP over the best published result. Since we used the same input features as [27], the results suggest that our algorithm learns better representations shared across modalities. To take a closer look at our model, we performed an additional control experiment exploring the benefit of the recurrent encoding structure of the MDRNN.
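Table 2 reports mean average precision. For reference, per-class average precision can be computed as below; this is a standard sketch of the metric, not the authors' evaluation code, and mAP is its mean over the 38 classes.

```python
import numpy as np

def average_precision(scores, labels):
    """Average precision for one class: the mean of precision@rank taken at
    the ranks of the positive examples, with items sorted by decreasing score."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)                               # positives seen so far
    precision_at_rank = hits / np.arange(1, len(labels) + 1)
    return float(precision_at_rank[labels == 1].mean())
```

A perfect ranking (all positives above all negatives) gives AP = 1; pushing a positive down the ranking lowers the precision at its rank and hence the AP.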
We compare the performance of models with different numbers of mean-field iterations.^8 Table 3 reports the validation set mAPs for 0 to 10 iterations. For multimodal queries, the MDRNN with 10 iterations improves recognition performance by only 0.8% over the model with 0 iterations. For unimodal queries, however, the improvement is significant: a 5.0% gain. Moreover, the largest improvement comes from having at least one iteration (3.4% gain from 0 to 1 iteration; 1.6% gain from 1 to 10 iterations). This suggests that the most crucial factor of improvement is inference with the reconstructed missing data modality (e.g., text features), and that the quality of the inferred missing modality improves with the number of iterations.

Table 3: Validation set mAPs on the MIR-Flickr database with different numbers of mean-field iterations.

  # iterations       0      1      2      3      5      10
  Multimodal query   0.677  0.678  0.679  0.680  0.682  0.685
  Unimodal query     0.557  0.591  0.599  0.602  0.605  0.607

Retrieval Tasks. We perform retrieval tasks with multimodal and unimodal input queries. Following the experimental setting of [27], we select 5000 image-text pairs from the test set to form a database and use a disjoint set of 1000 test examples as queries. For each query, we compute the relevance score to each database item as the cosine similarity of their joint representations. The binary relevance label between a query and a database item is 1 if they share any of the 38 class labels. Our model achieves 0.633 mAP with multimodal queries and 0.638 mAP with unimodal queries, significantly outperforming the multimodal DBM [27], which reported 0.622 mAP with multimodal queries and 0.614 mAP with unimodal queries.

^7 We treat the text features as binary, which differs from [27], where they are modeled with a replicated-softmax RBM [25].
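The retrieval scoring described above reduces to cosine similarity between joint representations. A generic sketch:

```python
import numpy as np

def retrieve(query_repr, db_reprs, k=5):
    """Rank database items by cosine similarity of their joint representations
    to the query representation; returns the top-k indices and all scores."""
    q = query_repr / np.linalg.norm(query_repr)
    d = db_reprs / np.linalg.norm(db_reprs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity to every database item
    return np.argsort(-scores)[:k], scores
```

Because cosine similarity ignores vector magnitude, an item whose representation points in the same direction as the query ranks first with score 1.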
The rationale is that a tag is unlikely to be assigned more than once to a single image.

^8 In [21], a "video-only" deep autoencoder was proposed whose objective is to predict the audio data and reconstruct the video data when only video is given as input during training. Our baseline model (MDRNN with 0 iterations) is similar, but differs in that we do not have a reconstruction training objective.

Figure 3: Retrieval results with multimodal queries. The leftmost image-text pairs are the multimodal query samples; those to the right of the bar are the samples retrieved from the database with the highest similarity to the query. We include more results in the supplementary material.

4.3 PASCAL VOC 2007

We evaluate the proposed algorithm on the PASCAL VOC 2007 database. The original dataset does not contain user tags, but Guillaumin et al. [7] collected user tags from the Flickr website.^9 Motivated by the success of convolutional neural networks (CNNs) in large-scale visual object recognition [14], we used DeCAF7 features [5] as the input features for the visual pathway, where DeCAF7 is the 4096-dimensional feature extracted from a CNN trained on ImageNet [4]. For the text features, we used the vocabulary of size 804 suggested by [7]. For the unsupervised feature learning of the MDRNN, we used the unlabeled data of the MIR-Flickr database, converting the text features to the new vocabulary of the PASCAL database.
The network architecture used in this experiment is as follows: [4096, 1536, 1536] variables for the visual pathway, [804, 512, 1536] variables for the text pathway, and 2048 variables for the top-layer joint network. Following standard practice, we report the mAP over the 20 object classes. The performance improvement of our method is significant: 81.5% mAP with multimodal queries and 76.2% mAP with unimodal queries, compared to the baseline's 74.5% mAP with multimodal queries (DeCAF7 + Text) and 74.3% mAP with unimodal queries (DeCAF7).

5 Conclusion

Motivated by a property of good generative models of multimodal data, we proposed a novel multimodal deep learning framework based on the variation of information. The minimum variation of information objective enables learning good shared representations of multiple heterogeneous data modalities with better prediction of a missing input modality. We demonstrated the effectiveness of our method on the multimodal RBM and its deep extensions, showing state-of-the-art recognition performance on the MIR-Flickr database and competitive performance on the PASCAL VOC 2007 database with multimodal (visual + text) and unimodal (visual only) queries.

Acknowledgments

This work was supported in part by ONR N00014-13-1-0762, Toyota, and the Google Faculty Research Award.

^9 http://lear.inrialpes.fr/people/guillaumin/data.php

References

[1] Y. Bengio, L. Yao, G. Alain, and P. Vincent. Generalized denoising auto-encoders as generative models. In NIPS, 2013.
[2] Y. Bengio, E. Thibodeau-Laufer, G. Alain, and J. Yosinski. Deep generative stochastic networks trainable by backprop. In ICML, 2014.
[3] A. Bosch, A. Zisserman, and X. Munoz. Image classification using random forests and ferns. In ICCV, 2007.
[4] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[5] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T.
Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. CoRR, abs/1310.1531, 2013. [6] I. Goodfellow, M. Mirza, A. Courville, and Y. Bengio. Multi-prediction deep Boltzmann machines. In NIPS, 2013. [7] M. Guillaumin, J. Verbeek, and C. Schmid. Multimodal semi-supervised learning for image classification. In CVPR, 2010. [8] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002. [9] G. E. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006. [10] J. Huang and B. Kingsbury. Audio-visual deep learning for noise robust speech recognition. In ICASSP, 2013. [11] M. J. Huiskes and M. S. Lew. The MIR Flickr retrieval evaluation. In ICMIR, 2008. [12] M. J. Huiskes, B. Thomee, and M. S. Lew. New trends and ideas in visual concept detection: The MIR Flickr retrieval evaluation initiative. In ICMIR, 2010. [13] Y. Kim, H. Lee, and E. M. Provost. Deep learning for robust feature generation in audiovisual emotion recognition. In ICASSP, 2013. [14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012. [15] K. Lai, L. Bo, X. Ren, and D. Fox. RGB-D object recognition: Features, algorithms, and a large scale benchmark. In Consumer Depth Cameras for Computer Vision, pages 167–192. Springer, 2013. [16] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. [17] I. Lenz, H. Lee, and A. Saxena. Deep learning for detecting robotic grasps. In RSS, 2013. [18] B. S. Manjunath, J-R. Ohm, V. V. Vasudevan, and A. Yamada. Color and texture descriptors. IEEE Transactions on Circuits and Systems for Video Technology, 11(6):703–715, 2001. [19] V. Mnih, H. Larochelle, and G. E. Hinton. Conditional restricted boltzmann machines for structured output prediction. 
In UAI, 2011. [20] R. M. Neal. Learning stochastic feedforward networks. Department of Computer Science, University of Toronto, 1990. [21] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng. Multimodal deep learning. In ICML, 2011. [22] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42(3):145–175, 2001. [23] D. Rao, M. De Deuge, N. Nourani-Vatani, B. Douillard, S. B. Williams, and O. Pizarro. Multimodal learning for autonomous underwater vehicles from visual and bathymetric data. In ICRA, 2014. [24] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In AISTATS, 2009. [25] R. Salakhutdinov and G. E. Hinton. Replicated softmax: an undirected topic model. In NIPS, 2009. [26] H-C. Shin, M. R. Orton, D. J. Collins, S. J. Doran, and M. O. Leach. Stacked autoencoders for unsupervised feature learning and multiple organ detection in a pilot study using 4D patient data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1930–1943, 2013. [27] N. Srivastava and R. Salakhutdinov. Multimodal learning with deep Boltzmann machines. In NIPS, 2012. [28] N. Srivastava and R. Salakhutdinov. Discriminative transfer learning with tree-based priors. In NIPS, 2013. [29] Y. Tang and R. Salakhutdinov. Learning stochastic feedforward neural networks. In NIPS, 2013. [30] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML, 2008. [31] J. Verbeek, M. Guillaumin, T. Mensink, and C. Schmid. Image annotation with tagprop on the MIR Flickr set. In ICMIR, 2010. [32] A. Wang, J. Lu, G. Wang, J. Cai, and T-J. Cham. Multi-modal unsupervised feature learning for RGB-D scene labeling. In ECCV. Springer, 2014. 9
Exploiting easy data in online optimization Amir Sani Gergely Neu Alessandro Lazaric SequeL team, INRIA Lille – Nord Europe, France {amir.sani,gergely.neu,alessandro.lazaric}@inria.fr Abstract We consider the problem of online optimization, where a learner chooses a decision from a given decision set and suffers some loss associated with the decision and the state of the environment. The learner’s objective is to minimize its cumulative regret against the best fixed decision in hindsight. Over the past few decades numerous variants have been considered, with many algorithms designed to achieve sub-linear regret in the worst case. However, this level of robustness comes at a cost. Proposed algorithms are often over-conservative, failing to adapt to the actual complexity of the loss sequence which is often far from the worst case. In this paper we introduce a general algorithm that, provided with a “safe” learning algorithm and an opportunistic “benchmark”, can effectively combine good worst-case guarantees with much improved performance on “easy” data. We derive general theoretical bounds on the regret of the proposed algorithm and discuss its implementation in a wide range of applications, notably in the problem of learning with shifting experts (a recent COLT open problem). Finally, we provide numerical simulations in the setting of prediction with expert advice with comparisons to the state of the art. 1 Introduction We consider a general class of online decision-making problems, where a learner sequentially decides which actions to take from a given decision set and suffers some loss associated with the decision and the state of the environment. The learner’s goal is to minimize its cumulative loss as the interaction between the learner and the environment is repeated. Performance is usually measured with regard to regret; that is, the difference between the cumulative loss of the algorithm and the best single decision over the horizon in the decision set. 
The objective of the learning algorithm is to guarantee that the per-round regret converges to zero as time progresses. This general setting includes a wide range of applications such as online linear pattern recognition, sequential investment, and time-series prediction. Numerous variants of this problem have been considered over the last few decades, mainly differing in the shape of the decision set (see [6] for an overview). One of the most popular variants is the problem of prediction with expert advice, where the decision set is the N-dimensional simplex and the per-round losses are linear functions of the learner's decision. In this setting, a number of algorithms are known to guarantee regret of order √T after T repetitions of the game. Another well-studied setting is online convex optimization (OCO), where the decision set is a convex subset of R^d and the loss functions are convex and smooth. Again, a number of simple algorithms are known to guarantee a worst-case regret of order √T in this setting. These results hold for any (possibly adversarial) assignment of the loss sequences. Thus, these algorithms are guaranteed to achieve a decreasing per-round regret that approaches the performance of the best fixed decision in hindsight even in the worst case. Furthermore, these guarantees are unimprovable, in the sense that there exist sequences of loss functions on which the learner suffers Ω(√T) regret no matter what algorithm it uses. However, this robustness comes at a cost. These algorithms are often overconservative and fail to adapt to the actual complexity of the loss sequence, which in practice is often far from the worst possible.
For instance, the simple strategy of following the leader (FTL, otherwise known as fictitious play in game theory, see, e.g., [6, Chapter 7]), which at each round picks the single decision that minimizes the total losses so far, guarantees O(log T) regret in the expert setting when assuming i.i.d. loss vectors. The same strategy also guarantees O(log T) regret in the OCO setting, when assuming all loss functions are strongly convex. On the other hand, the risk of using this strategy is that it is known to suffer Ω(T) regret in the worst case. This paper focuses on how to distinguish between “easy” and “hard” problem instances, while achieving the best possible guarantees on both types of loss sequences. This problem recently received much attention in a variety of settings (see, e.g., [8] and [13]), but most of the proposed solutions required the development of ad-hoc algorithms for each specific scenario and definition of “easy” problems. Another obvious downside of such ad-hoc solutions is that their theoretical analysis is often quite complicated and difficult to generalize to more complex problems. In the current paper, we set out to define an algorithm providing a general structure that can be instantiated in a wide range of settings by simply plugging in the most appropriate choice of two algorithms for learning on “easy” and “hard” problems. Aside from exploiting easy data, our method has other potential applications. For example, in some sensitive applications we may want to protect ourselves from complete catastrophe, rather than take risks for higher payoffs. In fact, our work builds directly on the results of Even-Dar et al. [9], who point out that learning algorithms in the experts setting may fail to satisfy the rather natural requirement of performing strictly better than a trivial algorithm that merely decides on which expert to follow by uniform coin flips. While Even-Dar et al.
propose methods that achieve this goal, they leave open an obvious question: is it possible to strictly improve the performance of an existing (and possibly naïve) solution by means of principled online learning methods? This problem can be seen as the polar opposite of failing to exploit easy data. In this paper, we push the idea of Even-Dar et al. one step further. We construct learning algorithms with order-optimal regret bounds, while also guaranteeing that their cumulative loss is within a constant factor of some pre-defined strategy referred to as the benchmark. We stress that this property is much stronger than simply guaranteeing O(1) regret with respect to some fixed distribution D as done by Even-Dar et al. [9], since we allow comparisons to any fixed strategy that is even allowed to learn. Our method guarantees that replacing an existing solution can be done at a negligible price in terms of output performance, with additional strong guarantees on the worst-case performance. However, in what follows, we will only regard this aspect of our results as an interesting consequence, while emphasizing the ability of our algorithm to exploit easy data. Our general structure, referred to as (A, B)-PROD, receives a learning algorithm A and a benchmark B as input. Depending on the online optimization setting, it is enough to set A to any learning algorithm with performance guarantees on “hard” problems and B to an opportunistic strategy exploiting the structure of “easy” problems. (A, B)-PROD smoothly mixes the decisions of A and B, achieving the best possible guarantees of both.

2 Online optimization with a benchmark

Parameters: set of decisions S, number of rounds T;
For all t = 1, 2, . . . , T, repeat
1. The environment chooses loss function f_t : S → [0, 1].
2. The learner chooses a decision x_t ∈ S.
3. The environment reveals f_t (possibly chosen depending on the past history of losses and decisions).
4. The forecaster suffers loss f_t(x_t).
Figure 1: The protocol of online optimization.

We now present the formal setting and an algorithm for online optimization with a benchmark. The interaction protocol between the learner and the environment is formally described in Figure 1. The online optimization problem is characterized by the decision set S and the class F ⊆ [0, 1]^S of loss functions utilized by the environment. The performance of the learner is usually measured in terms of the regret, defined as

R_T = sup_{x∈S} ∑_{t=1}^{T} (f_t(x_t) − f_t(x)).

We say that an algorithm learns if it makes decisions so that R_T = o(T). Let A and B be two online optimization algorithms that map observation histories to decisions in a possibly randomized fashion. For a formal definition, we fix a time index t ∈ [T] = {1, 2, . . . , T} and define the observation history (or, in short, the history) at the end of round t − 1 as H_{t−1} = (f_1, . . . , f_{t−1}); H_0 is defined as the empty set. Furthermore, define the random variables U_t and V_t, drawn from the standard uniform distribution, independently of H_{t−1} and of each other. The learning algorithms A and B are formally defined as mappings from F* × [0, 1] to S, with their respective decisions given as a_t := A(H_{t−1}, U_t) and b_t := B(H_{t−1}, V_t). Finally, we define a hedging strategy C that produces a decision x_t based on the history of decisions proposed by A and B, with the possible help of some external randomness represented by the uniform random variable W_t, as x_t = C(a_t, b_t, H*_{t−1}, W_t). Here, H*_{t−1} is the simplified history consisting of (f_1(a_1), f_1(b_1), . . . , f_{t−1}(a_{t−1}), f_{t−1}(b_{t−1})), and C bases its decisions only on the past losses incurred by A and B, without using any further information on the loss functions. The total expected loss of C is defined as L_T(C) = E[∑_{t=1}^{T} f_t(x_t)], where the expectation integrates over the possible realizations of the internal randomization of A, B and C. The total expected losses of A, B and any fixed decision x ∈ S are similarly defined.
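As a concrete illustration, the protocol of Figure 1 and the regret bookkeeping can be simulated in a few lines. The sketch below is our own illustration (not part of the paper) and specializes to linear losses on the simplex; the learner interface, loss shapes and the trivial uniform learner are all assumptions made for the example.

```python
import numpy as np

def run_protocol(learner, loss_vectors):
    """Run the protocol of Figure 1 for linear losses on the simplex and return
    the regret R_T against the best fixed expert in hindsight.
    `learner` maps the history H_{t-1} (an array of past loss vectors) to a
    point of the simplex."""
    total_loss = 0.0
    for t in range(len(loss_vectors)):
        x_t = learner(loss_vectors[:t])       # decision based on H_{t-1}
        total_loss += loss_vectors[t] @ x_t   # suffer f_t(x_t) = x_t^T l_t
    best_fixed = loss_vectors.sum(axis=0).min()   # best single expert in hindsight
    return total_loss - best_fixed

# A trivial learner that ignores the history (uniform weights over N = 4 experts).
uniform = lambda history: np.full(4, 0.25)

rng = np.random.default_rng(0)
regret = run_protocol(uniform, rng.uniform(size=(500, 4)))
```

For the uniform learner the returned regret is always nonnegative, since its cumulative loss is the average of the experts' totals, which is at least the minimum.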
Our goal is to define a hedging strategy with low regret against a benchmark strategy B, while also enjoying near-optimal guarantees on the worst-case regret against the best decision in hindsight. The (expected) regret of C against any fixed decision x ∈ S and against the benchmark are defined as

R_T(C, x) = E[∑_{t=1}^{T} (f_t(x_t) − f_t(x))],   R_T(C, B) = E[∑_{t=1}^{T} (f_t(x_t) − f_t(b_t))].

Input: learning rate η ∈ (0, 1/2], initial weights {w_{1,A}, w_{1,B}}, number of rounds T;
For all t = 1, 2, . . . , T, repeat
1. Let s_t = w_{t,A} / (w_{t,A} + w_{1,B}).
2. Observe a_t and b_t and predict x_t = a_t with probability s_t, b_t otherwise.
3. Observe f_t and suffer loss f_t(x_t).
4. Feed f_t to A and B.
5. Compute δ_t = f_t(b_t) − f_t(a_t) and set w_{t+1,A} = w_{t,A} · (1 + ηδ_t).

Figure 2: (A, B)-PROD

Our hedging strategy, (A, B)-PROD, is based on the classic PROD algorithm popularized by Cesa-Bianchi et al. [7] and builds on a variant of PROD called D-PROD, proposed in Even-Dar et al. [9], which (when properly tuned) achieves constant regret against the performance of a fixed distribution D over experts, while guaranteeing O(√(T log T)) regret against the best expert in hindsight. Our variant, (A, B)-PROD (shown in Figure 2), is based on the observation that it is not necessary to use a fixed distribution D in the definition of the benchmark: actually, any learning algorithm or signal can be used as a baseline. (A, B)-PROD maintains two weights, balancing the advice of the learning algorithm A and the benchmark B. The benchmark weight is defined as w_{1,B} ∈ (0, 1) and is kept unchanged during the entire learning process. The initial weight assigned to A is w_{1,A} = 1 − w_{1,B}, and in the remaining rounds t = 2, 3, . . . , T it is updated as

w_{t,A} = w_{1,A} ∏_{s=1}^{t−1} (1 − η (f_s(a_s) − f_s(b_s))),

where the difference between the losses of A and B is used. The output x_t is set to a_t with probability s_t = w_{t,A}/(w_{t,A} + w_{1,B}); otherwise it is set to b_t.¹ The following theorem states the performance guarantees for (A, B)-PROD.

Theorem 1 (cf. Lemma 1 in [9]).
For any assignment of the loss sequence, the total expected loss of (A, B)-PROD initialized with weights w_{1,B} ∈ (0, 1) and w_{1,A} = 1 − w_{1,B} simultaneously satisfies

L_T((A, B)-PROD) ≤ L_T(A) + η ∑_{t=1}^{T} (f_t(b_t) − f_t(a_t))² − (log w_{1,A})/η

and

L_T((A, B)-PROD) ≤ L_T(B) − (log w_{1,B})/η.

¹For convex decision sets S and loss families F, one can directly set x_t = s_t a_t + (1 − s_t) b_t at no expense.

The proof directly follows from the PROD analysis of Cesa-Bianchi et al. [7]. Next, we suggest a parameter setting for (A, B)-PROD that guarantees constant regret against the benchmark B and O(√(T log T)) regret against the learning algorithm A in the worst case.

Corollary 1. Let C ≥ 1 be an upper bound on the total benchmark loss L_T(B). Then setting η = (1/2)√((log C)/C) < 1/2 and w_{1,B} = 1 − w_{1,A} = 1 − η simultaneously guarantees

R_T((A, B)-PROD, x) ≤ R_T(A, x) + 2√(C log C) for any x ∈ S, and R_T((A, B)-PROD, B) ≤ 2 log 2,

against any assignment of the loss sequence. Notice that for any x ∈ S, the previous bounds can be written as

R_T((A, B)-PROD, x) ≤ min{R_T(A, x) + 2√(C log C), R_T(B, x) + 2 log 2}
, which states that (A, B)-PROD achieves the minimum between the regret of the benchmark B and that of the learning algorithm A, plus an additional regret of O(√(C log C)). If we consider that in most online optimization settings the worst-case regret for a learning algorithm is O(√T), the previous bound shows that at the cost of an additional factor of O(√(T log T)) in the worst case, (A, B)-PROD performs as well as the benchmark, which is very useful whenever R_T(B, x) is small. This suggests that if we set A to a learning algorithm with worst-case guarantees on “difficult” problems and B to an algorithm with very good performance only on “easy” problems, then (A, B)-PROD successfully adapts to the difficulty of the problem by finding a suitable mixture of A and B. Furthermore, as discussed by Even-Dar et al. [9], we note that in this case the PROD update rule is crucial to achieve this result: any algorithm that bases its decisions solely on the cumulative difference between f_t(a_t) and f_t(b_t) is bound to suffer an additional regret of O(√T) on both A and B. While HEDGE and follow-the-perturbed-leader (FPL) both fall into this category, it can be easily seen that this is not the case for PROD. A similar observation has been made by de Rooij et al. [8], who discuss the possibility of combining a robust learning algorithm and FTL by HEDGE and conclude that this approach is insufficient for their goals (see also Sect. 3.1). Finally, we note that the parameter proposed in Corollary 1 can hardly be computed in practice, since an upper bound on the loss of the benchmark L_T(B) is rarely available. Fortunately, we can adapt an improved version of PROD with adaptive learning rates recently proposed by Gaillard et al. [11] and obtain an anytime version of (A, B)-PROD. The resulting algorithm and its corresponding bounds are reported in App. B.

3 Applications

The following sections apply our results to special cases of online optimization.
Unless otherwise noted, all theorems are direct consequences of Corollary 1 and thus their proofs are omitted.

3.1 Prediction with expert advice

We first consider the most basic online optimization problem of prediction with expert advice. Here, S is the N-dimensional simplex Δ_N = {x ∈ ℝ₊^N : ∑_{i=1}^{N} x_i = 1} and the loss functions are linear; that is, the loss of any decision x ∈ Δ_N in round t is given as the inner product f_t(x) = xᵀℓ_t, where ℓ_t ∈ [0, 1]^N is the loss vector in round t. Accordingly, the family F of loss functions can be equivalently represented by the set [0, 1]^N. Many algorithms are known to achieve the optimal regret guarantee of O(√(T log N)) in this setting, including HEDGE (so dubbed by Freund and Schapire [10], see also the seminal works of Littlestone and Warmuth [20] and Vovk [23]) and the follow-the-perturbed-leader (FPL) prediction method of Hannan [16], later rediscovered by Kalai and Vempala [19]. However, as de Rooij et al. [8] note, these algorithms are usually too conservative to exploit “easily learnable” loss sequences and might be significantly outperformed by a simple strategy known as follow-the-leader (FTL), which predicts b_t = arg min_{x∈S} xᵀ ∑_{s=1}^{t−1} ℓ_s. For instance, FTL is known to be optimal in the case of i.i.d. losses, where it achieves a regret of O(log T). As a direct consequence of Corollary 1, we can use the general structure of (A, B)-PROD to match the performance of FTL on easy data and, at the same time, obtain the same worst-case guarantees of standard algorithms for prediction with expert advice. In particular, if we set FTL as the benchmark B and ADAHEDGE (see [8]) as the learning algorithm A, we obtain the following.

Theorem 2. Let S = Δ_N and F = [0, 1]^N.
Running (A, B)-PROD with A = ADAHEDGE and B = FTL, with the parameter setting suggested in Corollary 1, simultaneously guarantees

R_T((A, B)-PROD, x) ≤ R_T(ADAHEDGE, x) + 2√(C log C) ≤ √((L*_T (T − L*_T)/T) log N) + 2√(C log C)

for any x ∈ S, where L*_T = min_{x∈Δ_N} L_T(x), and

R_T((A, B)-PROD, FTL) ≤ 2 log 2

against any assignment of the loss sequence. While we recover the worst-case guarantee of O(√(T log N)) plus an additional regret of O(√(T log T)) on “hard” loss sequences, on “easy” problems we inherit the good performance of FTL.

Comparison with FLIPFLOP. The FLIPFLOP algorithm proposed by de Rooij et al. [8] addresses the problem of constructing algorithms that perform nearly as well as FTL on easy problems while retaining optimal guarantees on all possible loss sequences. More precisely, FLIPFLOP is a HEDGE algorithm where the learning rate η alternates between infinity (corresponding to FTL) and the value suggested by ADAHEDGE, depending on the cumulative mixability gaps over the two regimes. The resulting algorithm is guaranteed to achieve the regret guarantees

R_T(FLIPFLOP, x) ≤ 5.64 R_T(FTL, x) + 3.73 and R_T(FLIPFLOP, x) ≤ 5.64 √((L*_T (T − L*_T)/T) log N) + O(log N)

against any fixed x ∈ Δ_N at the same time. Notice that while the guarantees in Thm. 2 are very similar in nature to those of de Rooij et al. [8] concerning FLIPFLOP, the two results are slightly different. The first difference is that our worst-case bounds are inferior to theirs by a factor of order √(T log T).² On the positive side, our guarantees are much stronger when FTL outperforms ADAHEDGE. To see this, observe that their regret bound can be rewritten as L_T(FLIPFLOP) ≤ L_T(FTL) + 4.64 (L_T(FTL) − inf_x L_T(x)) + 3.73, whereas our result replaces the last two terms by 2 log 2.³ The other advantage of our result is that we can directly bound the total loss of our algorithm in terms of the total loss of ADAHEDGE (see Thm. 1). This is to be contrasted with the result of de Rooij et al.
[8], who upper bound their regret in terms of the regret bound of ADAHEDGE, which may not be tight and may be much worse in practice than the actual performance of ADAHEDGE. All these advantages of our approach stem from the fact that we smoothly mix the predictions of ADAHEDGE and FTL, while FLIPFLOP explicitly follows one policy or the other for extended periods of time, potentially accumulating unnecessary losses when switching too late or too early. Finally, we note that as FLIPFLOP is a sophisticated algorithm specifically designed for balancing the performance of ADAHEDGE and FTL in the expert setting, we cannot reasonably hope to beat its performance in every respect by using our general-purpose algorithm. Notice however that the analysis of FLIPFLOP is difficult to generalize to other learning settings such as the ones we discuss in the sections below.

Comparison with D-PROD. In the expert setting, we can also use a straightforward modification of the D-PROD algorithm originally proposed by Even-Dar et al. [9]: this variant of PROD includes the benchmark B in Δ_N as an additional expert and performs PROD updates for each base expert using the difference between the expert and benchmark losses. While the worst-case regret of this algorithm is of O(√(C log C log N)), which is asymptotically inferior to the guarantees given by Thm. 2, D-PROD also has its merits in some special cases. For instance, in a situation where the total loss of FTL and the regret of ADAHEDGE are both Θ(√T), D-PROD guarantees a regret of O(T^{1/4}) while the (A, B)-PROD guarantee remains O(√T).

²In fact, the worst case for our bound is realized when C = Ω(T), which is precisely the case when ADAHEDGE has excellent performance, as will be seen in Sect. 4.
³While one can parametrize FLIPFLOP so as to decrease the gap between these bounds, the bound on L_T(FLIPFLOP) is always going to be linear in R_T(FLIPFLOP, x).
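To make the construction of Sect. 2 concrete in the expert setting, the sketch below is our own illustration, not the authors' code: it substitutes a plain, classically tuned HEDGE learner for ADAHEDGE as A, uses FTL as B, applies the Corollary 1 tuning with the crude (assumed) bound C = T, and mixes the two decisions directly, which is allowed on a convex decision set.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 2000, 5
# "Easy" i.i.d. losses: expert 0 has expected loss 0.1, the others 0.5.
scale = np.array([0.2, 1.0, 1.0, 1.0, 1.0])
losses = rng.uniform(size=(T, N)) * scale

C = T                                    # crude upper bound on L_T(B); an assumption
eta = 0.5 * np.sqrt(np.log(C) / C)       # Corollary 1 tuning
w_A, w_B = eta, 1.0 - eta                # w_{1,A} = eta, w_{1,B} = 1 - eta (fixed)
eta_hedge = np.sqrt(8 * np.log(N) / T)   # standard tuning for the HEDGE learner A

cum = np.zeros(N)                        # cumulative losses observed so far
L_prod = L_ftl = L_hedge = 0.0
for t in range(T):
    # A = HEDGE: exponential weights over the experts.
    p = np.exp(-eta_hedge * (cum - cum.min()))
    p /= p.sum()
    # B = FTL: follow the single best expert so far.
    leader = int(np.argmin(cum))
    loss_a = losses[t] @ p
    loss_b = losses[t][leader]
    s = w_A / (w_A + w_B)                    # step 1 of Figure 2
    L_prod += s * loss_a + (1 - s) * loss_b  # convex mixing (footnote 1)
    L_hedge += loss_a
    L_ftl += loss_b
    w_A *= 1 + eta * (loss_b - loss_a)       # step 5: PROD update with delta_t
    cum += losses[t]
```

On this easy sequence the benchmark-side bound of Theorem 1 keeps the mixture's loss within −log(w_{1,B})/η ≈ 1 of FTL's total loss, so (A, B)-PROD inherits FTL's good behaviour while the PROD update would also protect it on adversarial sequences.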
3.2 Tracking the best expert

We now turn to the problem of tracking the best expert, where the goal of the learner is to control the regret against the best fixed strategy that is allowed to change its prediction at most K times during the entire decision process (see, e.g., [18, 14]). The regret of an algorithm A producing predictions a_1, . . . , a_T against an arbitrary sequence of decisions y_{1:T} ∈ S^T is defined as

R_T(A, y_{1:T}) = ∑_{t=1}^{T} (f_t(a_t) − f_t(y_t)).

Regret bounds in this setting typically depend on the complexity of the sequence y_{1:T} as measured by the number of decision switches C(y_{1:T}) = |{t ∈ {2, . . . , T} : y_t ≠ y_{t−1}}|. For example, a properly tuned version of the FIXED-SHARE (FS) algorithm of Herbster and Warmuth [18] guarantees that R_T(FS, y_{1:T}) = O(C(y_{1:T}) √(T log N)). This upper bound can be tightened to O(√(KT log N)) when the learner knows an upper bound K on the complexity of y_{1:T}. While this bound is unimprovable in general, one might wonder if it is possible to achieve better performance when the loss sequence is easy. This precise question was posed very recently as a COLT open problem by Warmuth and Koolen [24]. The generality of our approach allows us to solve their open problem by using (A, B)-PROD as a master algorithm to combine an opportunistic strategy with a principled learning algorithm. The following theorem states the performance of the (A, B)-PROD-based algorithm.

Theorem 3. Let S = Δ_N, F = [0, 1]^N and y_{1:T} be any sequence in S^T with known complexity K = C(y_{1:T}). Running (A, B)-PROD with an appropriately tuned instance of A = FS (see [18]), with the parameter setting suggested in Corollary 1, simultaneously guarantees

R_T((A, B)-PROD, y_{1:T}) ≤ R_T(FS, y_{1:T}) + 2√(C log C) = O(√(KT log N)) + 2√(C log C)

and R_T((A, B)-PROD, B) ≤ 2 log 2 against any assignment of the loss sequence.

The remaining problem is then to find a benchmark that works well on “easy” problems, notably when the losses are i.i.d.
in K (unknown) segments of the rounds 1, . . . , T. Out of the strategies suggested by Warmuth and Koolen [24], we analyze a windowed variant of FTL (referred to as FTL(w)) that bases its decision at time t on the losses observed in the time window [t − w − 1, t − 1] and picks the expert b_t = arg min_{x∈Δ_N} xᵀ ∑_{s=t−w−1}^{t−1} ℓ_s. The next proposition (proved in the appendix) gives a performance guarantee for FTL(w) with an optimal parameter setting.

Proposition 1. Assume that there exists a partition of [1, T] into K intervals such that the losses are generated i.i.d. within each interval. Furthermore, assume that the expectation of the loss of the best expert within each interval is at least δ away from the expected loss of all other experts. Then, setting w = 4 log(NT/K)/δ², the regret of FTL(w) is upper bounded for any y_{1:T} as

E[R_T(FTL(w), y_{1:T})] ≤ (4K/δ²) log(NT/K) + 2K,

where the expectation is taken with respect to the distribution of the losses.

3.3 Online convex optimization

Here we consider the problem of online convex optimization (OCO), where S is a convex and closed subset of ℝ^d and F is the family of convex functions on S. In this setting, if we assume that the loss functions are smooth (see [25]), an appropriately tuned version of online gradient descent (OGD) is known to achieve a regret of O(√T). As shown by Hazan et al. [17], if we additionally assume that the environment plays strongly convex loss functions and tune the parameters of the algorithm accordingly, the same algorithm can be used to guarantee an improved regret of O(log T). Furthermore, they also show that FTL enjoys essentially the same guarantees. The question whether the two guarantees can be combined was studied by Bartlett et al. [4], who present the adaptive online gradient descent (AOGD) algorithm that guarantees O(log T) regret when the aggregated loss functions F_t = ∑_{s=1}^{t} f_s are strongly convex for all t, while retaining the O(√T) bounds if this is not the case.
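For reference, the OGD strategy just mentioned can be sketched in a few lines. This is our own illustration: the Euclidean-ball decision set, the projection radius and the η/√t step-size schedule are assumptions following the standard tuning for convex losses [25], not a specification from the paper.

```python
import numpy as np

def ogd(grads, x0, radius, eta0=1.0):
    """Projected online gradient descent on the Euclidean ball of radius `radius`.
    `grads` is a sequence of gradient oracles, one per round; with step sizes
    eta0/sqrt(t), the regret is O(sqrt(T)) for convex losses."""
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for t, g in enumerate(grads, start=1):
        x = x - (eta0 / np.sqrt(t)) * g(x)   # gradient step on f_t
        norm = np.linalg.norm(x)
        if norm > radius:                    # project back onto the decision set
            x = x * (radius / norm)
        iterates.append(x.copy())
    return iterates

# Toy run: quadratic losses f_t(x) = 0.5 * ||x - z||^2 with a fixed target z,
# whose gradient at x is x - z.
z = np.array([1.0, 0.0])
path = ogd([lambda x: x - z] * 50, x0=np.zeros(2), radius=2.0)
```

With a fixed target the iterates settle on the minimizer; the strongly convex case discussed above would instead use step sizes of order 1/t to obtain the O(log T) rate.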
The next theorem shows that we can replace their complicated analysis by our general argument and show essentially the same guarantees.

Theorem 4. Let S be a convex closed subset of ℝ^d and F be the family of smooth convex functions on S. Running (A, B)-PROD with an appropriately tuned instance of A = OGD (see [25]) and B = FTL, with the parameter setting suggested in Corollary 1, simultaneously guarantees

R_T((A, B)-PROD, x) ≤ R_T(OGD, x) + 2√(C log C) = O(√T) + 2√(C log C)

for any x ∈ S and R_T((A, B)-PROD, FTL) ≤ 2 log 2 against any assignment of the loss sequence. In particular, this implies that R_T((A, B)-PROD, x) = O(log T) if the loss functions are strongly convex.

Similar to the previous settings, at the cost of an additional regret of O(√(T log T)) in the worst case, (A, B)-PROD successfully adapts to “easy” loss sequences, which in this case correspond to strongly convex functions, on which it achieves O(log T) regret.

3.4 Learning with two-point bandit feedback

We consider the multi-armed bandit problem with two-point feedback, where we assume that in each round t the learner picks one arm I_t in the decision set S = {1, 2, . . . , K} and also has the possibility to choose and observe the loss of another arm J_t. The learner suffers the loss f_t(I_t). Unlike the settings considered in the previous sections, the learner only gets to observe the loss function for arms I_t and J_t. This is a special case of the partial-information game recently studied by Seldin et al. [21]. A similar model has also been studied as a simplified version of online convex optimization with partial feedback [1]. While this setting does not entirely conform to our assumptions concerning A and B, observe that a hedging strategy C defined over A and B only requires access to the losses suffered by the two algorithms and not to the entire loss functions. Formally, we give A and B access to the decision set S, and C to S².
The hedging strategy C selects the pair (I_t, J_t) based on the arms suggested by A and B as (I_t, J_t) = (a_t, b_t) with probability s_t, and (b_t, a_t) with probability 1 − s_t. The probability s_t is a well-defined deterministic function of H*_{t−1}, thus the regret bound of (A, B)-PROD can be directly applied. In this case, “easy” problems correspond to i.i.d. loss sequences (with a fixed gap between the expected losses), for which the UCB algorithm of Auer et al. [2] is guaranteed to have O(log T) regret, while on “hard” problems we can rely on the EXP3 algorithm of Auer et al. [3], which suffers a regret of O(√(TK)) in the worst case. The next theorem gives the performance guarantee of (A, B)-PROD when combining UCB and EXP3.

Theorem 5. Consider the multi-armed bandit problem with K arms and two-point feedback. Running (A, B)-PROD with an appropriately tuned instance of A = EXP3 (see [3]) and B = UCB (see [2]), with the parameter setting suggested in Corollary 1, simultaneously guarantees

R_T((A, B)-PROD, x) ≤ R_T(EXP3, x) + 2√(C log C) = O(√(TK log K)) + 2√(C log C)

for any arm x ∈ {1, 2, . . . , K} and R_T((A, B)-PROD, UCB) ≤ 2 log 2 against any assignment of the loss sequence. In particular, if the losses are generated in an i.i.d. fashion and there exists a unique best arm x* ∈ S, then E[R_T((A, B)-PROD, x)] = O(log T), where the expectation is taken with respect to the distribution of the losses.

This result shows that even in the multi-armed bandit setting, we can achieve nearly the best performance on both “hard” and “easy” problems, given that we are allowed to pull two arms at a time. This result is to be contrasted with those of Bubeck and Slivkins [5], later improved by Seldin and Slivkins [22], who consider the standard one-point feedback setting.
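Before turning to that comparison, note that the randomized pair selection above is straightforward to implement; the sketch below is illustrative (the arm indices and the random source are our own assumptions).

```python
import random

def select_pair(a_t, b_t, s_t, rng=random.Random(0)):
    """Two-point feedback: play (I_t, J_t) = (a_t, b_t) with probability s_t,
    and (b_t, a_t) otherwise. The learner suffers f_t(I_t), while both f_t(I_t)
    and f_t(J_t) are observed and fed back to A and B."""
    if rng.random() < s_t:
        return a_t, b_t   # I_t is A's arm, J_t is B's arm
    return b_t, a_t

# With s_t = 1 the learner's arm is always pulled first; with s_t = 0, never.
```

Because `rng.random()` lies in [0, 1), the two boundary cases s_t = 1 and s_t = 0 are deterministic, which makes the update well defined at the extremes of the PROD mixture.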
The algorithm of Seldin and Slivkins, called EXP3++, is a variant of the EXP3 algorithm that simultaneously guarantees O(log² T) regret in stochastic environments while retaining the regret bound of O(√(TK log K)) in the adversarial setting. While our result holds under stronger assumptions, Thm. 5 shows that (A, B)-PROD is not restricted to work only in full-information settings. Once again, we note that such a result cannot be obtained by simply combining the predictions of UCB and EXP3 by a generic learning algorithm such as HEDGE.

4 Empirical Results

[Figure 3: Hand-tuned loss sequences from de Rooij et al. [8]. Four panels plot the regret over T = 2000 rounds in Settings 1–4 for FTL, ADAHEDGE, FLIPFLOP, D-PROD, (A, B)-PROD and (A, B)-HEDGE.]

We study the performance of (A, B)-PROD in the experts setting to verify the theoretical results of Thm. 2, show the importance of the (A, B)-PROD weight update rule and compare to FLIPFLOP. We report the performance of FTL, ADAHEDGE, FLIPFLOP, and of B = FTL and A = ADAHEDGE for the anytime versions of D-PROD, (A, B)-PROD, and (A, B)-HEDGE, a variant of (A, B)-PROD where an exponential weighting scheme is used. We consider the two-expert settings defined by de Rooij et al. [8], where deterministic loss sequences of T = 2000 steps are designed to obtain different configurations. (We refer to [8] for a detailed specification of the settings.) The results are reported in Figure 3.
The first remark is that the performance of (A, B)-PROD is always comparable with the best algorithm between A and B. In Setting 1, although FTL suffers linear regret, (A, B)-PROD rapidly adjusts the weights towards ADAHEDGE and finally achieves the same order of performance. In Settings 2 and 3 the situation is reversed, since FTL has a constant regret while ADAHEDGE has a regret of order √T. In this case, after a short initial phase where (A, B)-PROD has an increasing regret, it stabilizes on the same performance as FTL. In Setting 4 both ADAHEDGE and FTL have a constant regret, and (A, B)-PROD attains the same performance. These results match the behavior predicted by the bound of Thm. 2, which guarantees that the regret of (A, B)-PROD is roughly the minimum of that of FTL and ADAHEDGE. As discussed in Sect. 2, the PROD update rule used in (A, B)-PROD plays a crucial role in obtaining a constant regret against the benchmark, while other rules, such as the exponential update used in (A, B)-HEDGE, may fail to find a suitable mix between A and B. As illustrated in Settings 2 and 3, (A, B)-HEDGE suffers a regret similar to ADAHEDGE and fails to take advantage of the good performance of FTL, which has a constant regret. In Setting 1, (A, B)-HEDGE performs as well as (A, B)-PROD because FTL is constantly worse than ADAHEDGE and its corresponding weight is decreased very quickly, while in Setting 4 both FTL and ADAHEDGE achieve a constant regret and so does (A, B)-HEDGE. Finally, we compare (A, B)-PROD and FLIPFLOP. As discussed in Sect. 2, the two algorithms share similar theoretical guarantees, with potential advantages of one over the other depending on the specific setting. In particular, FLIPFLOP performs slightly better in Settings 2, 3, and 4, whereas (A, B)-PROD obtains smaller regret in Setting 1, where the constants in the FLIPFLOP bound show their teeth.
While it is not possible to clearly rank the two algorithms, (A, B)-PROD clearly avoids the pathological behavior exhibited by FLIPFLOP in Setting 1. Finally, we note that the anytime version of D-PROD is slightly better than (A, B)-PROD, but no consistent difference is observed.

5 Conclusions

We introduced (A, B)-PROD, a general-purpose algorithm which receives a learning algorithm A and a benchmark strategy B as inputs and guarantees the best regret between the two. We showed that whenever A is a learning algorithm with worst-case performance guarantees and B is an opportunistic strategy exploiting a specific structure within the loss sequence, we obtain an algorithm which smoothly adapts to “easy” and “hard” problems. We applied this principle to a number of different settings of online optimization, matching the performance of existing ad-hoc solutions (e.g., AOGD in convex optimization) and solving the open problem of learning on “easy” loss sequences in the tracking-the-best-expert setting proposed by Warmuth and Koolen [24]. We point out that the general structure of (A, B)-PROD could be instantiated in many other settings and scenarios in online optimization, such as learning with switching costs [12, 15] and, more generally, in any problem where the objective is to improve over a given benchmark strategy. The main open problem is the extension of our techniques to work with one-point bandit feedback.

Acknowledgements

This work was supported by the French Ministry of Higher Education and Research and by the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement 270327 (project CompLACS), and by FUI project Hermès.

References

[1] Agarwal, A., Dekel, O., and Xiao, L. (2010). Optimal algorithms for online convex optimization with multi-point bandit feedback. In Kalai, A. and Mohri, M., editors, Proceedings of the 23rd Annual Conference on Learning Theory (COLT 2010), pages 28–40.
[2] Auer, P., Cesa-Bianchi, N., and Fischer, P. (2002a). Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256.
[3] Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. (2002b). The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77.
[4] Bartlett, P. L., Hazan, E., and Rakhlin, A. (2008). Adaptive online gradient descent. In Platt, J. C., Koller, D., Singer, Y., and Roweis, S. T., editors, Advances in Neural Information Processing Systems 20, pages 65–72. Curran Associates. (December 3–6, 2007).
[5] Bubeck, S. and Slivkins, A. (2012). The best of both worlds: Stochastic and adversarial bandits. In COLT, pages 42.1–42.23.
[6] Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA.
[7] Cesa-Bianchi, N., Mansour, Y., and Stoltz, G. (2007). Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321–352.
[8] de Rooij, S., van Erven, T., Grünwald, P. D., and Koolen, W. M. (2014). Follow the leader if you can, hedge if you must. Accepted to the Journal of Machine Learning Research.
[9] Even-Dar, E., Kearns, M., Mansour, Y., and Wortman, J. (2008). Regret to the best vs. regret to the average. Machine Learning, 72(1-2):21–37.
[10] Freund, Y. and Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119–139.
[11] Gaillard, P., Stoltz, G., and van Erven, T. (2014). A second-order bound with excess losses. In Balcan, M.-F. and Szepesvári, Cs., editors, Proceedings of The 27th Conference on Learning Theory, volume 35 of JMLR Proceedings, pages 176–196. JMLR.org.
[12] Geulen, S., Vöcking, B., and Winkler, M. (2010). Regret minimization for online buffering problems using the weighted majority algorithm. In COLT, pages 132–143.
[13] Grünwald, P., Koolen, W. M., and Rakhlin, A., editors (2013).
NIPS Workshop on “Learning Faster from Easy Data”.
[14] György, A., Linder, T., and Lugosi, G. (2012). Efficient tracking of large classes of experts. IEEE Transactions on Information Theory, 58(11):6709–6725.
[15] György, A. and Neu, G. (2013). Near-optimal rates for limited-delay universal lossy source coding. Submitted to the IEEE Transactions on Information Theory.
[16] Hannan, J. (1957). Approximation to Bayes risk in repeated play. Contributions to the Theory of Games, 3:97–139.
[17] Hazan, E., Agarwal, A., and Kale, S. (2007). Logarithmic regret algorithms for online convex optimization. Machine Learning, 69:169–192.
[18] Herbster, M. and Warmuth, M. (1998). Tracking the best expert. Machine Learning, 32:151–178.
[19] Kalai, A. and Vempala, S. (2005). Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71:291–307.
[20] Littlestone, N. and Warmuth, M. (1994). The weighted majority algorithm. Information and Computation, 108:212–261.
[21] Seldin, Y., Bartlett, P., Crammer, K., and Abbasi-Yadkori, Y. (2014). Prediction with limited advice and multiarmed bandits with paid observations. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014), pages 280–287.
[22] Seldin, Y. and Slivkins, A. (2014). One practical algorithm for both stochastic and adversarial bandits. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014), pages 1287–1295.
[23] Vovk, V. (1990). Aggregating strategies. In Proceedings of the Third Annual Workshop on Computational Learning Theory (COLT), pages 371–386.
[24] Warmuth, M. and Koolen, W. (2014). Shifting experts on easy data. COLT 2014 open problem.
[25] Zinkevich, M. (2003). Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning (ICML).
|
2014
|
2
|
5,291
|
Extracting Certainty from Uncertainty: Transductive Pairwise Classification from Pairwise Similarities Tianbao Yang†, Rong Jin‡♮ †The University of Iowa, Iowa City, IA 52242 ‡Michigan State University, East Lansing, MI 48824 ♮Alibaba Group, Hangzhou 311121, China tianbao-yang@uiowa.edu, rongjin@msu.edu Abstract In this work, we study the problem of transductive pairwise classification from pairwise similarities¹. The goal of transductive pairwise classification from pairwise similarities is to infer the pairwise class relationships, which we refer to as pairwise labels, between all examples, given a subset of class relationships for a small set of examples, which we refer to as labeled examples. We propose a very simple yet effective algorithm that consists of two steps: the first step completes the sub-matrix corresponding to the labeled examples, and the second step reconstructs the label matrix from the completed sub-matrix and the provided similarity matrix. Our analysis shows that, under several mild preconditions, we can recover the label matrix with a small error if the top eigen-space corresponding to the largest eigenvalues of the similarity matrix covers the column space of the label matrix well and has low coherence, and the number of observed pairwise labels is sufficiently large. We demonstrate the effectiveness of the proposed algorithm through several experiments. 1 Introduction Pairwise classification aims to determine whether two examples belong to the same class. It has been studied in several different contexts, depending on what prior information is provided. In this paper, we tackle the pairwise classification problem given a pairwise similarity matrix and a small set of true pairwise labels. We refer to this problem as transductive pairwise classification from pairwise similarities. The problem has many real-world applications.
For example, in network science [17], an interesting task is to predict whether a link between two nodes is likely to occur, given a snapshot of a network and certain similarities between the nodes. In computational biology [16], an important problem is to predict whether two protein sequences belong to the same family based on their sequence similarities, with some partial knowledge about protein families available. In computer vision, a good application can be found in face verification [5], which aims to verify whether two face images belong to the same identity given some pairs of training images. The challenge in solving the problem arises from the uncertainty of the given pairwise similarities in reflecting the pairwise labels. Therefore, the naive approach of binarizing the similarity values with a threshold would suffer from poor performance. One common approach to the problem is to cast it as a clustering problem and derive the pairwise labels from the clustering results. Many algorithms have been proposed to cluster the data using the pairwise similarities and a subset of pairwise labels. However, the success of these algorithms usually depends on how many pairwise labels are provided and on how well the pairwise similarities reflect the true pairwise labels. ¹The pairwise similarities are usually derived from some side information rather than from the underlying class labels. In this paper, we focus on the theoretical analysis of the problem. Essentially, we answer the question of what property the similarity matrix should satisfy and how many pre-determined pairwise labels are sufficient in order to recover the true pairwise labels between all examples.
We base our analysis on a very simple scheme composed of two steps: (i) the first step recovers the sub-matrix of the label matrix from the pre-determined entries by matrix completion, which has been studied extensively and can be solved efficiently; (ii) the second step estimates the full label matrix by simple matrix products based on the top eigen-space of the similarity matrix and the completed sub-matrix. Our empirical studies demonstrate that the proposed algorithm can be more effective than spectral clustering and the kernel alignment approach in exploiting the pre-determined labels and the provided similarities. To summarize our theoretical results: under some appropriate pre-conditions, namely that the distribution of the data over the underlying classes in hindsight is well balanced, that the labeled data are uniformly sampled from all data, and that the pre-determined pairwise labels are uniformly sampled from all pairs between the labeled examples, we can recover the label matrix with a small error if (i) the top eigen-space corresponding to the s largest eigenvalues of the similarity matrix covers the column space of the label matrix well and has low coherence, and (ii) the number of pre-determined pairwise labels N on m labeled examples satisfies N ≥ Ω(m log²(m)) with m ≥ Ω(µ_s s log s), where µ_s is a coherence measure of the top eigen-space of the similarity matrix. 2 Related Work The transductive pairwise classification problem is closely related to semi-supervised clustering, where a set of pairwise labels is provided together with pairwise similarities or feature vectors to cluster a set of data points. We focus our attention on the works where the pairwise similarities, instead of the feature vectors, serve as inputs. Spectral clustering [19] and kernel k-means [7] are probably the most widely applied clustering algorithms given a similarity matrix or a kernel matrix.
In spectral clustering, one first computes the top eigenvectors of a similarity matrix (or the bottom eigenvectors of a Laplacian matrix), and then clusters the rows of the eigenvector matrix into a pre-defined number of clusters. Kernel k-means is a variant of k-means that computes distances using the kernel similarities. One can easily derive pairwise labels from the clustering results by assuming that two data points assigned to the same cluster belong to the same class, and vice versa. To utilize some pre-determined pairwise labels, one can normalize the similarities and replace the entries corresponding to the observed pairs with the provided labels. There also exist some works that try to learn a parametric or non-parametric kernel from the pre-determined pairwise labels and the pairwise similarities. Hoi et al. [13] proposed to learn a parametric kernel characterized by a combination of the top eigenvectors of a (kernel) similarity matrix by maximizing a kernel alignment measure over the combination weights. Other works [2, 6] that exploit pairwise labels for clustering are conducted using feature vector representations of the data points. However, all of these works lack an analysis of the algorithms, which is important from a theoretical point of view. There also exists a large body of research on preference learning and ranking in the semi-supervised or transductive setting [1, 14]. We do not compare with these because the ground-truth function over a pair of examples that we analyze, denoted by h(u, v), is symmetric, i.e., h(u, v) = h(v, u), while in preference learning the function h(u, v) is asymmetric. Our theoretical analysis is built on several previous studies on matrix completion and matrix reconstruction by random sampling.
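As a concrete illustration of the spectral clustering procedure described earlier in this section (top eigenvectors of the similarity matrix, then k-means on the embedded rows), here is a minimal NumPy sketch. The function name and the deterministic farthest-point seeding for k-means are our own choices, not from the paper.

```python
import numpy as np

def spectral_cluster(S, k, n_iter=50):
    """Minimal sketch of spectral clustering: embed each example via the
    top-k eigenvectors of the (symmetric) similarity matrix, normalize the
    rows, then run a small k-means loop on the embedded rows."""
    w, V = np.linalg.eigh(S)
    U = V[:, np.argsort(w)[::-1][:k]]              # top-k eigenvectors
    X = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    idx = [0]                                      # deterministic farthest-point seeding
    for _ in range(1, k):
        d = ((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1).min(axis=1)
        idx.append(int(np.argmax(d)))
    centers = X[idx].copy()
    for _ in range(n_iter):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):                # guard against empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

On a block-structured similarity matrix this recovers the blocks as clusters; a production implementation would use a library k-means with multiple restarts.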
Candès and Recht [3] developed a theory of matrix completion from partial observations that provides a theoretical guarantee of perfect recovery of a low-rank matrix under appropriate conditions on the matrix and the number of observations. Several works [23, 10, 15, 28] analyzed the approximation error of the Nyström method, which approximates a kernel matrix by sampling a small number of its columns. All of these analyses exploit an important measure of an orthogonal matrix, namely matrix incoherence, which also plays an important role in our analysis. It has been brought to our attention that two recent works [29, 26] are closely related to the present work, but with notable differences. Both works present a matrix completion theory with side information. Yi et al. [29] aim to complete the pairwise label matrix given partially observed entries for semi-supervised clustering. Under the assumption that the column space of the symmetric pairwise label matrix to be completed is spanned by the top left singular vectors of the data matrix, they show that their algorithm can perfectly recover the pairwise label matrix with high probability. In [26], the authors assume that the column and row spaces of the matrix to be completed are given a priori and show that the number of observations required to perfectly complete the matrix can then be reduced substantially. There are two notable differences between [29, 26] and our work: (i) we target a transductive setting, in which the observed partial entries are not uniformly sampled from the whole matrix, so their algorithms are not applicable; (ii) we prove a small reconstruction error even when the assumption that the column space of the pairwise label matrix is spanned by the top eigenvectors of the pairwise similarity matrix fails. 3 The Problem and A Simple Algorithm We first describe the problem of transductive pairwise classification from pairwise similarities, and then present a simple algorithm.
3.1 Problem Definition Let D_n = {o_1, ..., o_n} be a set of n examples. We are given a pairwise similarity matrix S ∈ R^{n×n}, with each entry S_ij measuring the similarity between o_i and o_j; a set of m random samples D̂_m = {ô_1, ..., ô_m} ⊆ D_n; and a subset of pre-determined pairwise labels, each either 1 or 0, randomly sampled from all pairs between the examples in D̂_m. The problem is to recover the pairwise labels of all remaining pairs between examples in D_n. Note that the key difference between our problem and previous matrix completion problems is that the partially observed entries are randomly distributed over D̂_m × D̂_m rather than over D_n × D_n. The pairwise labels indicate the pairwise class relationships: a pairwise label equal to 1 indicates that the two examples belong to the same class, and a label equal to 0 indicates that they belong to different classes. We denote by r the number of underlying classes. We introduce a label matrix Z ∈ {0, 1}^{n×n} to represent the pairwise labels between all examples, and similarly denote by Ẑ ∈ {0, 1}^{m×m} the pairwise labels between any two labeled examples² in D̂_m. To capture the subset of pre-determined pairwise labels for the labeled data, we introduce a set Σ ⊂ [m]×[m] indicating the subset of observed entries in Ẑ, i.e., the pairwise label Ẑ_{i,j}, (i, j) ∈ Σ, is observed if and only if the pairwise label between ô_i and ô_j is pre-determined. We denote by Ẑ_Σ the partially observed label matrix, i.e., [Ẑ_Σ]_{i,j} = Ẑ_{i,j} if (i, j) ∈ Σ, and N/A if (i, j) ∉ Σ. The goal of transductive pairwise classification from pairwise similarities is to estimate the pairwise label matrix Z ∈ {0, 1}^{n×n} for all examples in D_n using (i) the pairwise similarities in S and (ii) the partially observed label matrix Ẑ_Σ. 3.2 A Simple Algorithm In order to estimate the label matrix Z, the proposed algorithm consists of two steps.
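As a rough illustration of this problem setup, one might construct the pairwise label matrix Z, the labeled subset, and the observed index set Σ (represented here as a boolean mask) as follows. The function names, and the choice to treat diagonal pairs as observed, are our own assumptions for the sketch.

```python
import numpy as np

def make_pairwise_labels(y):
    """Pairwise label matrix: Z[i, j] = 1 iff examples i and j share a class."""
    y = np.asarray(y)
    return (y[:, None] == y[None, :]).astype(float)

def sample_observations(n, m, obs_frac, seed=0):
    """Draw the labeled subset (size m) uniformly from the n examples, then
    reveal a random symmetric subset Sigma of the m x m entries."""
    rng = np.random.default_rng(seed)
    labeled = rng.choice(n, size=m, replace=False)
    mask = rng.random((m, m)) < obs_frac
    mask = np.triu(mask, 1) | np.triu(mask, 1).T   # symmetric Sigma
    np.fill_diagonal(mask, True)                   # diagonal pairs are trivially 1 (our choice)
    return labeled, mask
```

The sub-matrix Ẑ is then `make_pairwise_labels(y)[np.ix_(labeled, labeled)]`, with entries outside the mask treated as unobserved.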
The first step is to recover the sub-matrix Ẑ, and the second step is to estimate the label matrix Z using the recovered Ẑ and the provided similarity matrix S. Recover the sub-matrix Ẑ. First, we note that the label matrix Z and the sub-matrix Ẑ are of low rank under the assumption that the number of hidden classes r is small. To see this, let g_k ∈ {1, 0}^n and ĝ_k ∈ {1, 0}^m denote the class assignments to the k-th hidden class for all data and for the labeled data, respectively. It is straightforward to show that Z = Σ_{k=1}^r g_k g_k^T, Ẑ = Σ_{k=1}^r ĝ_k ĝ_k^T, (1) which clearly indicates that both Z and Ẑ are of low rank if r is significantly smaller than m. As a result, we can apply the matrix completion algorithm [20] to recover Ẑ by solving the following optimization problem: min_{M ∈ R^{m×m}} ‖M‖_tr s.t. M_{i,j} = Ẑ_{i,j} ∀(i, j) ∈ Σ, (2) where ‖M‖_tr denotes the nuclear norm of a matrix.
Algorithm 1: A Simple Algorithm for Transductive Pairwise Classification by Matrix Completion
1: Input: S, a pairwise similarity matrix between all examples in D_n; Ẑ_Σ, the subset of observed pairwise labels for the labeled examples in D̂_m; s < m, the number of eigenvectors used for estimating Z
2: Compute the first s eigenvectors of the similarity matrix S // Preparation
3: Estimate Ẑ by solving the optimization problem in (2) // Step 1: recover the sub-matrix Ẑ
4: Estimate the label matrix Z using (5) // Step 2: estimate the label matrix Z
5: Output: Z
Estimate the label matrix Z. The second step is to estimate the remaining entries in the label matrix Z. In the sequel, for ease of analysis, we attain an estimate of the full matrix Z, from which one can obtain the pairwise labels of all remaining pairs. We first describe the motivation of the second step and then present the details of the computation. ²The labeled examples refer to examples in D̂_m that serve as the basis for the pre-determined pairwise labels.
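Step 1 above calls for solving the nuclear-norm program (2). As a compact illustration, the sketch below uses the closely related "hard-impute" heuristic (alternating a rank-r projection with re-imposing the observed entries); this is an assumption of the sketch, not the solver used in the paper, but it recovers the same low-rank completions in easy cases.

```python
import numpy as np

def complete_submatrix(Z_obs, mask, rank, n_iter=100):
    """Illustrative stand-in for the nuclear-norm program (2): alternately
    project onto rank-`rank` matrices and re-impose the observed entries
    (the 'hard-impute' heuristic, our simplification)."""
    M = np.where(mask, Z_obs, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        M = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        M[mask] = Z_obs[mask]                      # keep observed labels fixed
    return M
```

For a label sub-matrix generated by r balanced classes with only a few hidden entries, the iteration quickly fills the missing entries with values near their true 0/1 labels.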
Assume there exists an orthogonal matrix U_s = (u_1, ..., u_s) ∈ R^{n×s}, with s ≥ r, whose column space subsumes the column space of the label matrix Z; then there exist a_k ∈ R^s, k = 1, ..., r, such that g_k = U_s a_k, k = 1, ..., r. (3) Considering the formulation of Z and Ẑ in (1), the second step works as follows: we first compute an estimate of Σ_{k=1}^r a_k a_k^T from the completed sub-matrix Ẑ, and then compute an estimate of Z based on it. To this end, we construct the following optimization problems for k = 1, ..., r: â_k = argmin_a ‖ĝ_k − Û_s a‖² = (Û_s^T Û_s)† Û_s^T ĝ_k, (4) where Û_s ∈ R^{m×s} is the sub-matrix of U_s whose rows correspond to the global indices of the labeled examples in D̂_m with respect to D_n. Then we can estimate Σ_{k=1}^r a_k a_k^T and Z by Σ_{k=1}^r â_k â_k^T = (Û_s^T Û_s)† Û_s^T (Σ_{k=1}^r ĝ_k ĝ_k^T) Û_s (Û_s^T Û_s)† = (Û_s^T Û_s)† Û_s^T Ẑ Û_s (Û_s^T Û_s)†, Z′ = U_s (Σ_{k=1}^r â_k â_k^T) U_s^T = U_s (Û_s^T Û_s)† Û_s^T Ẑ Û_s (Û_s^T Û_s)† U_s^T. (5) In order to complete the algorithm, we need to specify how to construct the orthogonal matrix U_s = (u_1, ..., u_s). Inspired by previous studies on spectral clustering [18, 19], we construct U_s from the first s eigenvectors corresponding to the s largest eigenvalues of the provided similarity matrix. A justification of this practice is that if the similarity graph induced by a similarity matrix has r connected components, then the eigen-space of the similarity matrix corresponding to the r largest eigenvalues is spanned by the indicator vectors of the components. Ideally, if the similarity graph is equivalent to the label matrix Z, then the indicator vectors of the connected components are exactly g_1, ..., g_r. Finally, we present the detailed steps of the proposed algorithm in Algorithm 1. Remarks on the Algorithm. The performance of the proposed algorithm relies on two factors.
First, how accurately the sub-matrix Ẑ is recovered by matrix completion. According to our later analysis, as long as the number of observed entries is sufficiently large (e.g., |Σ| ≥ Ω(m log² m)), one can exactly recover the sub-matrix Ẑ. Second, how well the top eigen-space of S covers the column space of the label matrix Z. As shown in Section 4, if they are close enough, the estimated matrix of Z has a small error provided the number of labeled examples m is sufficiently large (e.g., m ≥ Ω(µ_s s log s), where µ_s is a coherence measure of the top eigen-space of S). It is interesting to compare the proposed algorithm to the spectral clustering algorithm [19] and the spectral kernel learning algorithm [13], since all three algorithms exploit the top eigenvectors of a similarity matrix. The spectral clustering algorithm employs a k-means algorithm to cluster the rows of the top eigenvector matrix. The spectral kernel learning algorithm optimizes a diagonal matrix Λ = diag(λ_1, ..., λ_s) to learn a kernel matrix K = U_s Λ U_s^T by maximizing the kernel alignment with the pre-determined labels. In contrast, we estimate the pairwise label matrix by Z′ = U_s M U_s^T, where the matrix M is learned from the recovered sub-matrix Ẑ and the provided similarity matrix S. The recovered sub-matrix Ẑ serves as supervised information and the similarity matrix S serves as the input data for estimating the label matrix Z (c.f. equation (4)). Because the first step exploits the low-rank structure of Ẑ, we are able to gain more useful information for the estimation in the second step. In our experiments, we observe improved performance of the proposed algorithm compared with the spectral clustering and spectral kernel learning algorithms.
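Step 2 of the algorithm, the estimate Z′ = U_s M U_s^T built from equations (4) and (5), might be sketched as follows. The function name is ours; the formula follows equation (5) directly.

```python
import numpy as np

def estimate_label_matrix(S, Z_hat, labeled, s):
    """Sketch of equations (4)-(5): with U_s the top-s eigenvectors of the
    similarity matrix and Z_hat the completed sub-matrix on the labeled
    examples, form Z' = U_s (U^T U)^+ U^T Z_hat U (U^T U)^+ U_s^T, where
    U is U_s restricted to the labeled rows."""
    w, V = np.linalg.eigh(S)                 # S assumed symmetric
    Us = V[:, np.argsort(w)[::-1][:s]]       # top-s eigenvectors of S
    Uh = Us[labeled]                         # rows of the labeled examples
    P = np.linalg.pinv(Uh.T @ Uh) @ Uh.T     # (U^T U)^+ U^T, as in (4)
    return Us @ (P @ Z_hat @ P.T) @ Us.T     # equation (5)
```

In the ideal setting of Theorem 2 (column space of Z spanned by the top eigenvectors, e.g., S = Z, and labeled examples from every class), this reproduces Z exactly.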
4 Theoretical Results In this section, we present theoretical results regarding the reconstruction error of the proposed algorithm, which essentially answer the questions of what property the similarity matrix should satisfy and how many labeled examples and pre-determined pairwise labels are required for a good or perfect recovery of the label matrix Z. Before stating the theoretical results, we first introduce some notation. Let p_i denote the fraction of all examples in D_n that belong to the i-th class. To facilitate our presentation and analysis, we also introduce a coherence measure µ_s of the orthogonal matrix U_s = (u_1, ..., u_s) ∈ R^{n×s}, defined by µ_s = (n/s) max_{1≤i≤n} Σ_{j=1}^s U_{ij}². (6) The coherence measure has been exploited in many studies of matrix completion [29, 26] and matrix reconstruction [23, 10]. It is notable that [4] defined a coherence measure of a complete orthogonal matrix U = (u_1, ..., u_n) ∈ R^{n×n} by µ = √n max_{1≤i≤n, 1≤j≤n} |U_{ij}|. It is not difficult to see that µ_s ≤ µ² ≤ n. The coherence measure in (6) is also known as the largest statistical leverage score. Drineas et al. [8] proposed a fast approximation algorithm to compute the coherence of an arbitrary matrix. Intuitively, the coherence measures the degree to which the eigenvectors in U_s or U are correlated with the canonical bases. The purpose of introducing the coherence measure is to quantify how large the number of sampled labeled examples m must be in order to guarantee that the sub-matrix Û_s ∈ R^{m×s} has full column rank. We defer the detailed statement to the supplementary material. We begin with the recovery of the sub-matrix Ẑ. The theorem below states that if the distribution of the data over the r hidden classes is not skewed, then Ω(r²m log² m) pairwise labels between the labeled examples suffice for a perfect recovery of the sub-matrix Ẑ. Theorem 1.
Suppose the entries (i, j) ∈ Σ are sampled uniformly at random from [m] × [m], and the examples in D̂_m are sampled uniformly at random from D_n. Then, with probability at least 1 − Σ_{i=1}^r exp(−m p_i/8) − 2m^{−2}, Ẑ is the unique solution to (2) if |Σ| ≥ (512 / min_{1≤i≤r} p_i²) m log²(2m). Next, we present a theorem stating that if the column space of Z is spanned by the orthogonal vectors u_1, ..., u_s and m ≥ Ω(µ_s s ln(m²s)), the estimated matrix Z′ equals the underlying true matrix Z. Theorem 2. Suppose the entries (i, j) ∈ Σ are sampled uniformly at random from [m] × [m], and the examples in D̂_m are sampled uniformly at random from D_n. If the column space of Z is spanned by u_1, ..., u_s, m ≥ 8µ_s s log(m²s), and |Σ| ≥ (512 / min_{1≤i≤r} p_i²) m log²(2m), then with probability at least 1 − Σ_{i=1}^r exp(−m p_i/8) − 3m^{−2}, we have Z′ = Z, where Z′ is computed by (5). Similar to other matrix reconstruction algorithms [4, 29, 26, 23, 10], the theorem above indicates that a low coherence measure µ_s plays a pivotal role in the success of the proposed algorithm. Indeed, several previous works [23, 11], as well as our experiments, have studied the coherence measure of real data sets and demonstrated that it is not rare to have an incoherent similarity matrix, i.e., one with a small coherence measure. We now consider a more realistic scenario in which some of the column vectors of Z do not lie in the subspace spanned by the top s eigenvectors of the similarity matrix. To quantify the gap between the column space of Z and the top eigen-space of the pairwise similarity matrix, we define the quantity ε = Σ_{k=1}^r ‖g_k − P_{U_s} g_k‖², where P_{U_s} = U_s U_s^T is the projection matrix that projects a vector onto the space spanned by the columns of U_s. The following theorem shows that if ε is small, so is the error of the solution Z′ given in (5). Theorem 3. Suppose the entries (i, j) ∈ Σ are sampled uniformly at random from [m] × [m], and the examples in D̂_m are sampled uniformly at random from D_n.
If the conditions on m and |Σ| in Theorem 2 are satisfied, then, with probability at least 1 − Σ_{i=1}^r exp(−m p_i) − 3m^{−2}, we have ‖Z′ − Z‖_F ≤ ε(1 + 2n/m + 2√(2n)/√(mε)) ≤ O(nε/m + n√ε/√m). Sketch of Proofs. Before ending this section, we present a sketch of the proofs; the details are deferred to the supplementary material. The proof of Theorem 1 relies on a matrix completion theory by Recht [20], which guarantees the perfect recovery of the low-rank matrix Ẑ provided the number of observed entries is sufficiently large. The key to the proof is to show that the coherence measure of the sub-matrix Ẑ is bounded, using a concentration inequality. To prove Theorem 2, we resort to convex optimization theory and Lemma 1 in [10], which shows that the sub-sampled matrix Û_s ∈ R^{m×s} has full column rank if m ≥ Ω(µ_s s log s). Since Z = U_s (Σ_{k=1}^r a_k a_k^T) U_s^T and Z′ = U_s (Σ_{k=1}^r â_k â_k^T) U_s^T, proving Z′ = Z is equivalent to showing that â_k = a_k, k ∈ [r], i.e., that the a_k, k ∈ [r], are the unique minimizers of the problems in (4). It suffices to show that the optimization problems in (4) are strictly convex, which follows immediately from the fact that Û_s^T Û_s is a full-rank PSD matrix with high probability. The proof of Theorem 3 is more involved. The crux of the proof is to decompose g_k = g_k^⊥ + g_k^∥, where g_k^∥ = P_{U_s} g_k is the orthogonal projection of g_k onto the subspace spanned by u_1, ..., u_s and g_k^⊥ = g_k − g_k^∥, and then bound ‖Z − Z′‖_F ≤ ‖Z − Z*‖_F + ‖Z′ − Z*‖_F, where Z* = Σ_k g_k^∥ (g_k^∥)^T. 5 Experimental Results In this section, we present an empirical evaluation of our proposed simple algorithm for Transductive Pairwise Classification by Matrix Completion (TPCMC for short) on one synthetic data set and three real-world data sets. 5.1 Synthetic Data We first generate a synthetic data set of 1000 examples evenly distributed over 4 classes, each of which contains 250 data points.
Then we generate a pairwise similarity matrix S by first constructing a pairwise label matrix Z ∈ {0, 1}^{1000×1000} and then adding a noise term δ_ij to Z_ij, where δ_ij ∈ (0, 0.5) follows a uniform distribution. We use S as the input pairwise similarity matrix of our proposed algorithm. The coherence measure of the top eigenvectors of S is small, as shown in Figure 1. According to random perturbation matrix theory [22], the top eigen-space of S is close to the column space of the label matrix Z. We choose s = 20, which yields roughly µ_s = 2. We randomly select m = 4sµ_s = 160 examples to form D̂_m, out of which |Σ| = 2mr² = 5120 entries of the 160 × 160 sub-matrix are fed into the algorithm. In other words, roughly 0.5% of the entries of the whole pairwise label matrix Z ∈ {0, 1}^{1000×1000} are observed. We show the ground-truth pairwise label matrix, the similarity matrix, and the estimated label matrix in Figure 1, which clearly demonstrates that the recovered label matrix is more accurate than the perturbed similarities. Figure 1: from left to right: µ_s vs. s, the true pairwise label matrix, the perturbed similarity matrix, and the recovered pairwise label matrix. The error of the estimated matrix is reduced by a factor of two: ‖Z − Z′‖_F / ‖Z − S‖_F = 0.5. 5.2 Real Data We further evaluate the performance of our algorithm on three real-world data sets: splice [24]³, gisette [12]⁴, and citeseer [21]⁵. The splice data set is a DNA sequence data set for recognizing splice junctions. The gisette data set is a perturbed image data set for handwritten digit recognition, originally constructed for feature selection. The citeseer data set is a paper citation data set that has been used for link prediction. We emphasize that we do not intend these data sets to be comprehensive, but rather illustrative case studies representative of a much wider range of applications. The statistics of the three data sets are summarized in Table 1.
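Returning to the synthetic setup of Section 5.1: the label matrix, the noisy similarity matrix, and the coherence measure (6) of its top eigen-space might be generated as in the sketch below. Symmetrizing the noise is our own assumption (the text only says a uniform δ_ij is added); the function names are ours.

```python
import numpy as np

def synthetic_similarity(n_per_class=250, n_classes=4, noise=0.5, seed=0):
    """Sketch of the Section 5.1 setup: pairwise label matrix Z for evenly
    sized classes, plus uniform noise in (0, noise) to form S. We
    symmetrize the noise, an assumption not spelled out in the text."""
    rng = np.random.default_rng(seed)
    y = np.repeat(np.arange(n_classes), n_per_class)
    Z = (y[:, None] == y[None, :]).astype(float)
    delta = rng.uniform(0.0, noise, size=Z.shape)
    return Z, Z + (delta + delta.T) / 2.0

def coherence(S, s):
    """Coherence measure (6) of the top-s eigenvectors of S:
    mu_s = (n/s) * max_i sum_j Us[i, j]^2."""
    n = S.shape[0]
    w, V = np.linalg.eigh(S)
    Us = V[:, np.argsort(w)[::-1][:s]]
    return (n / s) * np.max(np.sum(Us ** 2, axis=1))
```

For a noiseless balanced label matrix the coherence attains its minimum value of 1 (perfectly spread leverage scores); incoherent matrices are the easy case for the theory.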
Given a data set of size n, we randomly choose m = 20%n, 30%n, ..., 90%n examples, where 10% of the entries of the m×m label matrix are observed. We design the experiments in this way since, according to Theorem 1, the number of observed entries |Σ| increases as m increases. For each given m, we repeat the experiment ten times with random selections and report the performance scores averaged over the ten trials. We construct a similarity matrix S with each entry equal to the cosine similarity of two examples based on their feature vectors. We set s = 50 in our algorithm and in the other algorithms as well. The corresponding coherence measures µ_s of the three data sets are shown in the last column of Table 1. We compare with two state-of-the-art algorithms that utilize the pre-determined pairwise labels and the provided similarity matrix in different ways (c.f. the discussion at the end of Section 3): Spectral Clustering (SC) [19] and Spectral Kernel Learning (SKL) [13], for the task of clustering. To obtain a clustering from the proposed algorithm, we apply a similarity-based clustering algorithm to group the data into clusters based on the estimated label matrix; here we use spectral clustering [19] for simplicity and fair comparison. For SC, to utilize the pre-determined pairwise labels we substitute the entries corresponding to the observed pairs by 1 if the two examples are known to be in the same class and 0 if they are known to belong to different classes. For SKL, we also apply the spectral clustering algorithm to cluster the data based on the learned kernel matrix. The comparison to SC and SKL verifies the effectiveness of the proposed algorithm in exploiting the pre-determined labels and the provided similarities.
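The cosine-similarity construction of S described above is a one-liner in NumPy; this sketch (function name ours) guards against zero-norm feature vectors.

```python
import numpy as np

def cosine_similarity_matrix(X):
    """Similarity used in the real-data experiments: S[i, j] is the cosine
    similarity between the feature vectors of examples i and j."""
    Xn = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    return Xn @ Xn.T
```

The resulting S is symmetric with unit diagonal, so `np.linalg.eigh` can be used directly for its top eigenvectors.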
After obtaining the clusters, we calculate three well-known metrics, namely normalized mutual information [9], pairwise F-measure [27], and accuracy [25], which measure the degree to which the obtained clusters match the ground truth. Figures 2–4 show the performance of the different algorithms on the three data sets. First, the performance of all three algorithms generally improves as the ratio m/n increases, which is consistent with our theoretical result in Theorem 3. Second, our proposed TPCMC performs best in all cases under all three evaluation metrics, verifying its reliable performance. SKL generally performs better than SC, indicating that simply using the observed pairwise labels to directly modify the similarity matrix cannot fully utilize the label information. TPCMC is better than SKL, meaning that the proposed algorithm is more effective in mining the knowledge from the pre-determined labels and the similarity matrix. ³http://www.cs.toronto.edu/~delve/data/datasets.html ⁴http://www.nipsfsc.ecs.soton.ac.uk/datasets/ ⁵http://www.cs.umd.edu/projects/linqs/projects/lbc/
Table 1: Statistics of the data sets
name      # examples  # classes  coherence (µ_50)
splice    3175        2          1.97
gisette   7000        2          4.17
citeseer  3312        6          2.22
Figure 2: Performance on the splice data set (normalized mutual information, accuracy, and pairwise F-measure vs. m/n × 100%, comparing SKL, SC, and TPCMC).
6 Conclusions In this paper, we have presented a simple algorithm for transductive pairwise classification from pairwise similarities based on matrix completion and matrix products. The algorithm consists of two
simple steps: recovering the sub-matrix of pairwise labels given partially pre-determined pairwise labels, and estimating the full label matrix from the recovered sub-matrix and the provided pairwise similarities. The theoretical analysis establishes the conditions on the similarity matrix, the number of labeled examples, and the number of pre-determined pairwise labels under which the pairwise label matrix estimated by the proposed algorithm recovers the true one exactly, or with a small error, with overwhelming probability. Preliminary empirical evaluations have verified the potential of the proposed algorithm.
Figure 3: Performance on the gisette data set (normalized mutual information, accuracy, and pairwise F-measure vs. m/n × 100%, comparing SKL, SC, and TPCMC).
Figure 4: Performance on the citeseer data set (same metrics and methods).
Acknowledgement The work of Rong Jin was supported in part by the National Science Foundation (IIS-1251031) and the Office of Naval Research (N000141210431). References [1] N. Ailon. An active learning algorithm for ranking from pairwise preferences with an almost optimal query complexity. JMLR, 13:137–164, 2012. [2] S. Basu, M. Bilenko, and R. J. Mooney. A probabilistic framework for semi-supervised clustering. In Proceedings of SIGKDD, pages 59–68, 2004. [3] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009. [4] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inf.
Theor., 56:2053–2080, 2010. [5] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In Proceedings of CVPR, pages 539–546, 2005. [6] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In Proceedings of ICML, pages 209–216, 2007. [7] I. S. Dhillon, Y. Guan, and B. Kulis. Kernel k-means: spectral clustering and normalized cuts. In Proceedings of SIGKDD, pages 551–556, 2004. [8] P. Drineas, M. Magdon-Ismail, M. W. Mahoney, and D. P. Woodruff. Fast approximation of matrix coherence and statistical leverage. In Proceedings of ICML, 2012. [9] A. Fred and A. Jain. Robust data clustering. In Proceedings of IEEE CVPR, volume 2, 2003. [10] A. Gittens. The spectral norm errors of the naive Nyström extension. CoRR, abs/1110.5305, 2011. [11] A. Gittens and M. W. Mahoney. Revisiting the Nyström method for improved large-scale machine learning. CoRR, abs/1303.1849, 2013. [12] I. Guyon, S. R. Gunn, A. Ben-Hur, and G. Dror. Result analysis of the NIPS 2003 feature selection challenge. In NIPS, 2004. [13] S. C. H. Hoi, M. R. Lyu, and E. Y. Chang. Learning the unified kernel machines for classification. In Proceedings of SIGKDD, pages 187–196, 2006. [14] E. Hüllermeier and J. Fürnkranz. Learning from label preferences. In Proceedings of ALT, page 38, 2011. [15] R. Jin, T. Yang, M. Mahdavi, Y.-F. Li, and Z.-H. Zhou. Improved bounds for the Nyström method with application to kernel classification. IEEE Transactions on Information Theory, 59(10):6939–6949, 2013. [16] A. Kelil, S. Wang, R. Brzezinski, and A. Fleury. CLUSS: Clustering of protein sequences based on a new similarity measure. BMC Bioinformatics, 8, 2007. [17] D. Liben-Nowell and J. Kleinberg. The link-prediction problem for social networks. J. Am. Soc. Inf. Sci. Technol., 58:1019–1031, 2007. [18] U. Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17:395–416, 2007. [19] A. Y. Ng, M. I.
Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, pages 849–856, 2001. [20] B. Recht. A simpler approach to matrix completion. JMLR, 12:3413–3430, 2011. [21] P. Sen, G. M. Namata, M. Bilgic, L. Getoor, B. Gallagher, and T. Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93–106, 2008. [22] G. W. Stewart and J. guang Sun. Matrix Perturbation Theory. Academic Press, 1990. [23] A. Talwalkar and A. Rostamizadeh. Matrix coherence and the nystrom method. In Proceedings of UAI, pages 572–579, 2010. [24] G. G. Towell and J. W. Shavlik. Interpretation of artificial neural networks: Mapping knowledge-based neural networks into rules. In NIPS, pages 977–984, 1991. [25] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In NIPS, volume 15, pages 505–512, 2002. [26] M. Xu, R. Jin, and Z.-H. Zhou. Speedup matrix completion with side information: Application to multilabel learning. In NIPS, pages 2301–2309, 2013. [27] T. Yang, R. Jin, Y. Chi, and S. Zhu. Combining link and content for community detection: a discriminative approach. In Proceedings of SIGKDD, pages 927–936, 2009. [28] T. Yang, Y. Li, M. Mahdavi, R. Jin, and Z. Zhou. Nystr¨om method vs random fourier features: A theoretical and empirical comparison. In NIPS, pages 485–493, 2012. [29] J. Yi, L. Zhang, R. Jin, Q. Qian, and A. K. Jain. Semi-supervised clustering by input pattern assisted pairwise similarity matrix completion. In Proceedings of ICML, pages 1400–1408, 2013. 9
|
2014
|
20
|
5,292
|
On Sparse Gaussian Chain Graph Models Calvin McCarter Machine Learning Department Carnegie Mellon University calvinm@cmu.edu Seyoung Kim Lane Center for Computational Biology Carnegie Mellon University sssykim@cs.cmu.edu Abstract In this paper, we address the problem of learning the structure of Gaussian chain graph models in a high-dimensional space. Chain graph models are generalizations of undirected and directed graphical models that contain a mixed set of directed and undirected edges. While the problem of sparse structure learning has been studied extensively for Gaussian graphical models and more recently for conditional Gaussian graphical models (CGGMs), there has been little previous work on the structure recovery of Gaussian chain graph models. We consider linear regression models and a re-parameterization of the linear regression models using CGGMs as building blocks of chain graph models. We argue that when the goal is to recover model structures, there are many advantages of using CGGMs as chain component models over linear regression models, including convexity of the optimization problem, computational efficiency, recovery of structured sparsity, and ability to leverage the model structure for semi-supervised learning. We demonstrate our approach on simulated and genomic datasets. 1 Introduction Probabilistic graphical models have been extensively studied as a powerful tool for modeling a set of conditional independencies in a probability distribution [12]. In this paper, we are concerned with a class of graphical models, called chain graph models, that has been proposed as a generalization of undirected graphical models and directed acyclic graphical models [4, 9, 14]. Chain graph models are defined over chain graphs that contain a mixed set of directed and undirected edges but no partially directed cycles. In particular, we study the problem of learning the structure of Gaussian chain graph models in a high-dimensional setting. 
While the problem of learning sparse structures from high-dimensional data has been studied extensively for other related models such as Gaussian graphical models (GGMs) [8] and more recently conditional Gaussian graphical models (CGGMs) [17, 20], to our knowledge, there is little previous work that addresses this problem for Gaussian chain graph models. Even with a known chain graph structure, current methods for parameter estimation are hindered by the presence of multiple locally optimal solutions [1, 7, 21]. Since the seminal work on conditional random fields (CRFs) [13], a general recipe for constructing chain graph models has been to use CRFs as building blocks [12]. We employ this construction for Gaussian chain graph models and propose to use the recently-introduced sparse CGGMs [17, 20] as a Gaussian equivalent of general CRFs. When the goal is to learn the model structure, we show that this construction is superior to the popular alternative approach of using linear regression as component models. Some of the key advantages of our approach stem from the fact that sparse Gaussian chain graph models inherit the desirable properties of sparse CGGMs, such as convexity of the optimization problem and structured output prediction. In fact, our work is the first to introduce a joint estimation procedure for both the graph structure and parameters as a convex optimization problem, given the groups of variables for chain components.

Figure 1: Illustration of chain graph models. (a) A chain graph with two components, {x1, x2} and {x3}. (b) The moralized graph of the chain graph in (a). (c) After inference in the chain graph in (a), inferred indirect dependencies are shown as the dotted line. (d) A chain graph with three components, {x1, x2}, {x3}, and {x4}. (e) The moralized graph of the chain graph in (d). (f) After inference in the chain graph in (d), inferred indirect dependencies are shown as the dotted lines.

Another advantage of our approach is the ability to model a functional mapping from multiple related variables to other multiple related variables in a more natural way, via moralization in chain graphs, than other approaches that rely on complex penalty functions for inducing structured sparsity [11, 15]. Our work on sparse Gaussian chain graphs is motivated by problems in integrative genomic data analyses [6, 18]. While sparse GGMs have been extremely popular for learning networks from datasets of a single modality, such as gene-expression levels [8], we propose that sparse Gaussian chain graph models with CGGM components can be used to learn a cascade of networks by integrating multiple types of genomic data in a single statistical analysis. We show that our approach can reveal the module structures as well as the functional mapping between modules in different types of genomic data effectively. Furthermore, as the cost of collecting each data type differs, we show that semi-supervised learning can be used to make effective use of both fully-observed and partially-observed data. 2 Sparse Gaussian Chain Graph Models We consider a chain graph model for a probability distribution over J random variables x = {x1, . . . , xJ}. The chain graph model assumes that the random variables are partitioned into C chain components {x1, . . . , xC}, the τth component having size |τ|. In addition, it assumes a partially directed graph structure, where edges between variables within each chain component are undirected and edges across two chain components are directed. Given this chain graph structure, the joint probability distribution factorizes as p(x) = ∏_{τ=1}^{C} p(xτ|xpa(τ)), where xpa(τ) is the set of variables that are parents of one or more variables in xτ.
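To make the factorization concrete, here is a minimal numpy sketch (ours, not the authors' code) that evaluates log p(x) as a sum of component log-densities, assuming linear-Gaussian chain components of the form N(Bτ xpa(τ), Θτ^{-1}) introduced later in Eq. (1); all function and variable names are hypothetical:

```python
import numpy as np

def gauss_cond_logpdf(x_tau, mean, prec):
    """Log-density of N(x_tau | mean, prec^{-1}) for one chain component."""
    d = x_tau - mean
    _, logdet = np.linalg.slogdet(prec)
    k = len(x_tau)
    return 0.5 * (logdet - k * np.log(2 * np.pi) - d @ prec @ d)

def chain_graph_logpdf(x, components, parents, B, Theta):
    """log p(x) = sum_tau log p(x_tau | x_pa(tau)), with linear-Gaussian
    components p(x_tau | x_pa) = N(B_tau x_pa, Theta_tau^{-1})."""
    total = 0.0
    for tau, idx in enumerate(components):
        # B[tau] has shape (|tau|, |pa(tau)|); empty parents give a zero mean
        mean = B[tau] @ x[parents[tau]]
        total += gauss_cond_logpdf(x[idx], mean, Theta[tau])
    return total
```

With B set to zero and all precisions equal to the identity, the result reduces to the sum of standard normal log-densities, which gives a quick sanity check of the decomposition.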
Each factor p(xτ|xpa(τ)) models the conditional distribution of the chain component variables xτ given xpa(τ). This model can also be viewed as being constructed with CRFs for the p(xτ|xpa(τ))'s [13]. The conditional independence properties of undirected and directed graphical models have been extended to chain graph models [9, 14]. These can be easily read off by first constructing a moralized graph, where undirected edges are added between all pairs of nodes in xpa(τ) for each chain component τ and all the directed edges are converted into undirected edges (Figure 1). Then, subsets of variables xa and xb are conditionally independent given xc if xa and xb are separated by xc in the moralized graph. This conditional independence criterion for a chain graph is called c-separation and generalizes d-separation for Bayesian networks [12]. In this paper, we focus on Gaussian chain graph models, where both p(x) and the p(xτ|xpa(τ))'s are Gaussian distributed. Below, we review linear regression models and CGGMs as chain component models, and introduce our approach for learning chain graph model structures. 2.1 Sparse Linear Regression as Chain Component Model As the specific functional form of p(xτ|xpa(τ)) in Gaussian chain graph models, a linear regression model with multivariate responses has been widely considered [2, 3, 7]: p(xτ|xpa(τ)) = N(Bτ xpa(τ), Θτ^{-1}), (1) where Bτ ∈ R^{|τ|×|pa(τ)|} is the matrix of regression coefficients and Θτ is the |τ| × |τ| inverse covariance matrix that models correlated noise. Then, the non-zero elements in Bτ indicate the presence of directed edges from xpa(τ) to xτ, and the non-zero elements in Θτ correspond to the undirected edges among the variables in xτ. When the graph structure is known, an iterative procedure has been proposed to estimate the model parameters, but it converges only to one of many locally-optimal solutions [7]. When the chain component model has the form of Eq.
(1), in order to jointly estimate the sparse graph structure and the parameters, we adopt sparse multivariate regression with covariance estimation (MRCE) [16] for each chain component and solve the following optimization problem:

min ∑_{τ=1}^{C} [ tr((Xτ − Xpa(τ) Bτ^T) Θτ (Xτ − Xpa(τ) Bτ^T)^T) − N log |Θτ| ] + λ ∑_{τ=1}^{C} ||Bτ||_1 + γ ∑_{τ=1}^{C} ||Θτ||_1,

where Xα ∈ R^{N×|α|} is a dataset for N samples, ||·||_1 is the sparsity-inducing L1 penalty, and λ and γ are the regularization parameters that control the amount of sparsity in the parameters. As in MRCE [16], the problem above is not convex, but only bi-convex. 2.2 Sparse Conditional Gaussian Graphical Model as Chain Component Model As an alternative model for p(xτ|xpa(τ)) in Gaussian chain graph models, a re-parameterization of the linear regression model in Eq. (1) with natural parameters has been considered [14]. This model also has been called a CGGM [17] or Gaussian CRF [20] due to its equivalence to a CRF. A CGGM for p(xτ|xpa(τ)) takes the standard form of undirected graphical models as a log-linear model:

p(xτ|xpa(τ)) = exp(−(1/2) xτ^T Θτ xτ − xτ^T Θτ,pa(τ) xpa(τ)) / A(xpa(τ)), (2)

where Θτ ∈ R^{|τ|×|τ|} and Θτ,pa(τ) ∈ R^{|τ|×|pa(τ)|} are the parameters for the feature weights between pairs of variables within xτ and between pairs of variables across xτ and xpa(τ), respectively, and A(xpa(τ)) is the normalization constant. The non-zero elements of Θτ and Θτ,pa(τ) indicate edges among the variables in xτ and between xτ and xpa(τ), respectively. The linear regression model in Eq. (1) can be viewed as the result of performing inference in the probabilistic graphical model given by the CGGM in Eq. (2). This relationship between the two models can be seen by re-writing Eq. (2) in the form of a Gaussian distribution:

p(xτ|xpa(τ)) = N(−Θτ^{-1} Θτ,pa(τ) xpa(τ), Θτ^{-1}), (3)

where marginalization in a CGGM involves computing Bτ xpa(τ) = −Θτ^{-1} Θτ,pa(τ) xpa(τ) to obtain a linear regression model parameterized by Bτ.
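Given the Gaussian form in Eq. (3), sampling from a chain component reduces to standard multivariate-normal sampling with a mean and a precision matrix. A small illustrative sketch (hypothetical helper name, not from the paper), using the Cholesky factor of the precision:

```python
import numpy as np

def sample_component(rng, B_tau, Theta_tau, x_pa, n=1):
    """Draw n samples from a chain component of the form of Eq. (1)/(3):
    x_tau | x_pa ~ N(B_tau x_pa, Theta_tau^{-1}).
    If Theta = L L^T (Cholesky), then mean + L^{-T} z with z ~ N(0, I)
    has covariance Theta^{-1}."""
    mean = B_tau @ x_pa
    L = np.linalg.cholesky(Theta_tau)
    z = rng.standard_normal((n, len(mean)))
    u = np.linalg.solve(L.T, z.T).T  # u = L^{-T} z, so Cov(u) = Theta^{-1}
    return mean + u
```

Working with the precision directly avoids ever forming the covariance matrix explicitly, which matters when Θτ is large and sparse.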
In order to estimate the graph structure and parameters for Gaussian chain graph models with CGGMs as chain component models, we adopt the procedure for learning a sparse CGGM [17, 20] and minimize the negative log-likelihood of the data along with the sparsity-inducing L1 penalty:

min −L(X; Θ) + λ ∑_{τ=1}^{C} ||Θτ,pa(τ)||_1 + γ ∑_{τ=1}^{C} ||Θτ||_1,

where Θ = {Θτ, Θτ,pa(τ), τ = 1, . . . , C} and L(X; Θ) is the data log-likelihood for dataset X ∈ R^{N×J} with N samples. Unlike MRCE, the optimization problem for a sparse CGGM is convex, and efficient algorithms have been developed to find the globally-optimal solution with substantially lower computation time than that for MRCE [17, 20]. While maximum likelihood estimation leads to equivalent parameter estimates for CGGMs and linear regression models via the transformation Bτ = −Θτ^{-1} Θτ,pa(τ), imposing a sparsity constraint on each model leads to different estimates for the sparsity pattern of the parameters and the model structure [17]. The graph structure of a sparse CGGM directly encodes the probabilistic dependencies among the variables, whereas the sparsity pattern of Bτ = −Θτ^{-1} Θτ,pa(τ) obtained after marginalization can be interpreted as the indirect influence of covariates xpa(τ) on responses xτ. As illustrated in Figures 1(c) and 1(f), the CGGM parameters Θτ,pa(τ) (directed edges with solid lines) can be interpreted as direct dependencies between pairs of variables across xτ and xpa(τ), whereas Bτ = −Θτ^{-1} Θτ,pa(τ) obtained from inference can be viewed as indirect and inferred dependencies (directed edges with dotted lines). We argue in this paper that when the goal is to learn the model structure, performing the estimation with CGGMs for chain component models can lead to a more meaningful representation of the underlying structure in the data than imposing a sparsity constraint on linear regression models. Then the corresponding linear regression model can be inferred via marginalization.
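The contrast between the sparsity patterns of Θτ,pa(τ) and Bτ is easy to demonstrate numerically. In the sketch below (our illustration; `cggm_to_regression` is a hypothetical name), a Θτ,pa(τ) with a single non-zero entry produces a fully dense Bτ, because Θτ couples the responses:

```python
import numpy as np

def cggm_to_regression(Theta_tau, Theta_tau_pa):
    """Marginalization of the CGGM in Eq. (2): the regression view of
    Eq. (1) has coefficients B_tau = -Theta_tau^{-1} Theta_{tau,pa(tau)}."""
    return -np.linalg.solve(Theta_tau, Theta_tau_pa)

# Tridiagonal Theta_tau: responses form a chain, so its inverse is dense.
Theta_tau = np.array([[2.0, -1.0, 0.0],
                      [-1.0, 2.0, -1.0],
                      [0.0, -1.0, 2.0]])
# A single direct dependency: the covariate touches only the first response.
Theta_tau_pa = np.array([[-1.0], [0.0], [0.0]])
B_tau = cggm_to_regression(Theta_tau, Theta_tau_pa)
```

Here Θτ,pa(τ) has one non-zero entry, yet every entry of Bτ is non-zero: the single direct edge propagates through the response network, which is exactly the "indirect influence" interpretation described above.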
This approach also inherits many of the advantages of sparse CGGMs, such as convexity of the optimization problem. 2.3 Markov Properties and Chain Component Models When a CGGM is used as the component model, the overall chain graph model is known to have the Lauritzen-Wermuth-Frydenberg (LWF) Markov properties [9]. The LWF Markov properties also correspond to the standard probabilistic independencies in more general chain graphs constructed by using CRFs as building blocks [12]. Many previous works have noted that the LWF Markov properties do not hold for chain graph models with linear regression component models [2, 3]. The alternative Markov properties (AMP) were therefore introduced as the set of probabilistic independencies associated with chain graph models with linear regression component models [2, 3]. It has been shown that the LWF and AMP Markov properties are equivalent only for chain graph structures that do not contain the graph in Figure 1(a) as a subgraph [2, 3]. For example, according to the LWF Markov property, in the chain graph model in Figure 1(a), x1 ⊥ x3 | x2, as x1 and x3 are separated by x2 in the moralized graph in Figure 1(b). However, the corresponding AMP Markov property implies a different probabilistic independence relationship, x1 ⊥ x3. In the model in Figure 1(d), according to the LWF Markov property, we have x1 ⊥ x3 | {x2, x4}, whereas the AMP Markov property gives x1 ⊥ x3 | x4. We observe that when using sparse CGGMs as chain component models, we estimate a model with the LWF Markov properties and perform marginalization in this model to obtain a model with linear-regression chain components that can be interpreted with the AMP Markov properties.
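The moralization and c-separation procedure described above can be sketched directly. The following illustrative implementation is ours, not the paper's; its test assumes, for illustration, a Figure 1(a)-style structure in which x3 is a parent of the component {x1, x2}. It marries the parents of each chain component, drops edge directions, and checks separation by graph reachability:

```python
from itertools import combinations
from collections import deque

def moralize(n, undirected, directed, components):
    """Moralized graph of a chain graph: connect all parents of each chain
    component, then drop edge directions (used for c-separation)."""
    adj = {i: set() for i in range(n)}
    for a, b in undirected:
        adj[a].add(b); adj[b].add(a)
    for a, b in directed:  # directed edge a -> b
        adj[a].add(b); adj[b].add(a)
    for comp in components:
        pa = {a for a, b in directed if b in comp} - set(comp)
        for a, b in combinations(sorted(pa), 2):  # "marry" the parents
            adj[a].add(b); adj[b].add(a)
    return adj

def separated(adj, a, b, given):
    """True iff every path from a to b in the moralized graph passes
    through the conditioning set `given` (plain BFS with blocked nodes)."""
    seen, q = {a} | set(given), deque([a])
    while q:
        for v in adj[q.popleft()]:
            if v == b:
                return False
            if v not in seen:
                seen.add(v); q.append(v)
    return True
```

Under the assumed structure (undirected x1–x2, directed x3 → x2), x1 and x3 are separated by x2 but not marginally, matching the LWF reading of Figure 1(a) quoted above.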
3 Sparse Two-Layer Gaussian Chain Graph Models for Structured Sparsity Another advantage of using CGGMs as chain component models instead of linear regression is that the moralized graph, which is used to define the LWF Markov properties, can be leveraged to discover the underlying structure in a correlated functional mapping from multiple inputs to multiple outputs. In this section, we show that a sparse two-layer Gaussian chain graph model with CGGM components can be used to learn structured sparsity. The key idea behind our approach is that while inference in CGGMs within the chain graph model can reveal the shared sparsity patterns for multiple related outputs, a moralization of the chain graph can reveal those for multiple inputs. Statistical methods for learning models with structured sparsity were extensively studied in the multi-task learning literature, where the goal is to find input features that influence multiple related outputs simultaneously [5, 11, 15]. Most of the previous works assumed the output structure to be known a priori and constructed complex penalty functions that leverage this known output structure in order to induce a structured sparsity pattern in the estimated parameters of linear regression models. In contrast, a sparse CGGM was proposed as an approach for performing a joint estimation of the output structure and structured sparsity for multi-task learning. As discussed in Section 2.2, once the CGGM structure is estimated, the inputs relevant for multiple related outputs can be revealed via probabilistic inference in the graphical model. While sparse CGGMs focused on leveraging the output structure for improved predictions, another aspect of learning structured sparsity is to consider the input structure to discover multiple related inputs jointly influencing an output.
As CGGM is a discriminative model that does not model the input distribution, it is unable to capture input relatedness directly, although discriminative models in general are known to improve prediction accuracy. We address this limitation of CGGMs by embedding CGGMs within a chain graph and examining the moralized graph. We set up a two-layer Gaussian chain graph model for inputs x and outputs y as follows:

p(y, x) = p(y|x) p(x) = [exp(−(1/2) y^T Θyy y − x^T Θxy y) / A1(x)] · [exp(−(1/2) x^T Θxx x) / A2],

where a CGGM is used for p(y|x) and a GGM for p(x), and A1(x) and A2 are normalization constants. As the full model factorizes into two factors p(y|x) and p(x) with distinct sets of parameters, a sparse graph structure and parameters can be learned by using the optimization methods for sparse CGGM [20] and sparse GGM [8, 10]. The estimated Gaussian chain graph model leads to a GGM over both the inputs and outputs, which reveals the structure of the moralized graph:

p(y, x) = N( 0, [ Θyy, Θxy^T ; Θxy, Θxx + Θxy Θyy^{-1} Θxy^T ]^{-1} ).

In the above GGM, we notice that the graph structure over inputs x consists of two components: one for Θxx, describing the conditional dependencies within the input variables, and another for Θxy Θyy^{-1} Θxy^T, which reflects the results of moralization in the chain graph. If the graph Θyy contains connected components, the operation Θxy Θyy^{-1} Θxy^T for moralization induces edges among those inputs influencing the outputs in each connected component.

Figure 2: Illustration of sparse two-layer Gaussian chain graphs with CGGMs. (a) A two-layer Gaussian chain graph. (b) The results of performing inference and moralization in (a). The dotted edges correspond to indirect dependencies inferred by inference.
The edges among xj’s represent the dependencies introduced by moralization. Our approach is illustrated in Figure 2. Given the model in Figure 2(a), Figure 2(b) illustrates the inferred structured sparsity for a functional mapping from multiple inputs to multiple outputs. In Figure 2(b), the dotted edges correspond to inferred indirect dependencies introduced via marginalization in the CGGM p(y|x), which reveals how each input is influencing multiple related outputs. On the other hand, the additional edges among xj’s have been introduced by moralization ΘxyΘ−1 yyΘT xy for multiple inputs jointly influencing each output. Combining the results of marginalization and moralization, the two connected components in Figure 2(b) represent the functional mapping from {x1, x2} to {y1, y2} and from {x3, x4, x5} to {y3, y4, y5}, respectively. 4 Sparse Multi-layer Gaussian Chain Graph Models In this section, we extend the two-layer Gaussian chain graph model from the previous section into a multi-layer model to model data that are naturally organized into multiple layers. Our approach is motivated by problems in integrative genomic data analysis. In order to study the genetic architecture of complex diseases, data are often collected for multiple data types, such as genotypes, gene expressions, and phenotypes for a population of individuals [6, 18]. The primary goal of such studies is to identify the genotype features that influence gene expressions, which in turn influence phenotypes. In such problems, data can be naturally organized into multiple layers, where the influence of features in each layer propagates to the next layer in sequence. In addition, it is well-known that the expressions of genes within the same functional module are correlated and influenced by the common genotype features and that the coordinated expressions of gene modules affect multiple related phenotypes jointly. 
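The moralized joint precision from Section 3 can be assembled and inspected directly. A minimal numpy sketch (hypothetical helper name; toy dimensions) showing how the term Θxy Θyy^{-1} Θxy^T creates edges only among inputs that feed into the same connected component of Θyy:

```python
import numpy as np

def joint_precision(Theta_yy, Theta_xy, Theta_xx):
    """Precision of the joint GGM over (y, x) implied by the two-layer
    chain graph p(y|x) p(x). The bottom-right block carries the
    moralization term Theta_xy Theta_yy^{-1} Theta_xy^T."""
    moral = Theta_xy @ np.linalg.solve(Theta_yy, Theta_xy.T)
    top = np.hstack([Theta_yy, Theta_xy.T])
    bot = np.hstack([Theta_xy, Theta_xx + moral])
    return np.vstack([top, bot]), moral

# Toy example: y1-y2 form one connected component of Theta_yy, y3 is isolated;
# each input x_i is directly connected only to its own output y_i.
Theta_yy = np.array([[2.0, -1.0, 0.0],
                     [-1.0, 2.0, 0.0],
                     [0.0, 0.0, 2.0]])
Theta_xy = np.eye(3)
Theta_xx = 2.0 * np.eye(3)
P, moral = joint_precision(Theta_yy, Theta_xy, Theta_xx)
```

Inspecting `moral` shows a non-zero (x1, x2) entry (their outputs share a component of Θyy, so moralization marries them) and a zero (x1, x3) entry, mirroring the grouping behavior illustrated in Figure 2(b).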
These underlying structures in the genomic data can potentially be revealed by inference and moralization in sparse Gaussian chain graph models with CGGM components. In addition, we explore the use of semi-supervised learning, where the top- and bottom-layer data are fully observed but the middle-layer data are collected only for a subset of samples. In our application, genotype data and phenotype data are relatively easy to collect from patients' blood samples and from observations. However, gene-expression data collection is more challenging, as an invasive procedure such as surgery or biopsy is required to obtain tissue samples. 4.1 Models Given variables x = {x1, . . . , xJ}, y = {y1, . . . , yK}, and z = {z1, . . . , zL} at each of the three layers, we set up a three-layer Gaussian chain graph model as follows:

p(z, y|x) = p(z|y) p(y|x) = [exp(−(1/2) z^T Θzz z − y^T Θyz z) / C2(y)] · [exp(−(1/2) y^T Θyy y − x^T Θxy y) / C1(x)], (4)

where C1(x) and C2(y) are the normalization constants. In our application, x, y, and z correspond to genotypes, gene-expression levels, and phenotypes, respectively. As the focus of such studies lies on discovering how the genotypic variability influences gene expressions and phenotypes rather than the structure in genotype features, we do not model p(x) directly. Given the estimated sparse model for Eq. (4), the structured sparsity pattern can be recovered via inference and moralization. Computing Bxy = −Θyy^{-1} Θxy^T and Byz = −Θzz^{-1} Θyz^T corresponds to performing inference to reveal how multiple related yk's in Θyy (or zl's in Θzz) are jointly influenced by a common set of relevant xj's (or yk's). On the other hand, the effects of moralization can be seen from the joint distribution p(z, y|x) derived from Eq. (4): p(z, y|x) = N(−Θ(zz,yy)^{-1} Θ(yz,xy)^T x, Θ(zz,yy)^{-1}), where Θ(yz,xy) = (0_{J×L}, Θxy) and Θ(zz,yy) = [ Θzz, Θyz^T ; Θyz, Θyy + Θyz Θzz^{-1} Θyz^T ]. Θ(zz,yy) corresponds to the undirected graphical model over z and y conditional on x after moralization.
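Recovering the regression views Bxy and Byz from the estimated natural parameters is a pair of linear solves. A small sketch (our illustration; the function name is hypothetical) with toy diagonal parameters:

```python
import numpy as np

def chain_regression_maps(Theta_yy, Theta_xy, Theta_zz, Theta_yz):
    """Inference in the three-layer model of Eq. (4): the regression views
    B_xy = -Theta_yy^{-1} Theta_xy^T (mapping x -> y) and
    B_yz = -Theta_zz^{-1} Theta_yz^T (mapping y -> z)."""
    B_xy = -np.linalg.solve(Theta_yy, Theta_xy.T)
    B_yz = -np.linalg.solve(Theta_zz, Theta_yz.T)
    return B_xy, B_yz

# Toy sizes J=3, K=2, L=1 with diagonal precisions so the solves are obvious.
Theta_yy = 2.0 * np.eye(2)
Theta_xy = np.array([[-1.0, 0.0], [0.0, -1.0], [0.0, 0.0]])  # J x K
Theta_zz = 4.0 * np.eye(1)
Theta_yz = np.array([[-2.0], [0.0]])                          # K x L
B_xy, B_yz = chain_regression_maps(Theta_yy, Theta_xy, Theta_zz, Theta_yz)
```

Using `solve` rather than explicitly inverting Θyy and Θzz keeps the computation numerically stable and exploits any factorization already available.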
4.2 Semi-supervised Learning Given a dataset D = {Do, Dh}, where Do = {Xo, Yo, Zo} for the fully-observed data and Dh = {Xh, Zh} for the samples with missing gene-expression levels, for semi-supervised learning, we adopt an EM algorithm that iteratively maximizes the expected log-likelihood of the complete data, L(Do; Θ) + E[L(Dh, Yh; Θ)], combined with L1-regularization, where L(Do; Θ) is the data log-likelihood with respect to the model in Eq. (4) and the expectation is taken with respect to p(y|z, x) = N(µy|x,z, Σy|x,z), with µy|x,z = −Σy|x,z (Θyz z + Θxy^T x) and Σy|x,z = (Θyy + Θyz Θzz^{-1} Θyz^T)^{-1}. 5 Results In this section, we empirically demonstrate that CGGMs are more effective components for sparse Gaussian chain graph models than linear regression for various tasks, using synthetic and real-world genomic datasets. We used the sparse three-layer structure for p(z, y|x) in all our experiments. 5.1 Simulation Study In the simulation study, we considered two scenarios for the true models: CGGM-based and linear-regression-based Gaussian chain graph models. We evaluated the performance in terms of graph structure recovery and prediction accuracy in both supervised and semi-supervised settings. In order to simulate data, we assumed a problem size of J=500, K=100, and L=50 for x, y, and z, respectively, and generated samples from known true models. Since we do not model p(x), we used an arbitrary multinomial distribution to generate samples for x. The true parameters for the CGGM-based simulation were set as follows. We set the graph structure in Θyy to a randomly-generated scale-free network with a community structure [19] with six communities. The edge weights were drawn randomly from a uniform distribution [0.8, 1.2]. We then set Θyy to the graph Laplacian of this network plus small positive values along the diagonal so that Θyy is positive definite. We generated Θzz using a similar strategy, assuming four communities.
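The Laplacian-plus-diagonal construction used for Θyy and Θzz can be sketched in a few lines; the graph Laplacian is positive semi-definite, so adding a small positive diagonal guarantees positive definiteness while preserving the network's sparsity pattern (our illustration; the helper name is hypothetical):

```python
import numpy as np

def laplacian_precision(adjacency, eps=0.1):
    """Turn a weighted network into a valid precision matrix, as in the
    simulation setup: graph Laplacian plus small positive diagonal values.
    Off-diagonal zeros of the result match the network's non-edges."""
    W = np.asarray(adjacency, dtype=float)
    L = np.diag(W.sum(axis=1)) - W  # degree matrix minus adjacency
    return L + eps * np.eye(W.shape[0])
```

For example, a three-node chain network yields a tridiagonal, positive-definite Θ whose zero entries coincide with the missing edges of the chain.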
Θxy was set to a sparse random matrix, where 0.4% of the elements have non-zero values drawn from a uniform distribution [-1.2, -0.8]. Θyz was generated using a similar strategy, with a sparsity level of 0.5%. We set the sparsity pattern of Θyz so that it roughly respects the functional mapping from communities in y to communities in z. Specifically, after reordering the variables in y and z by performing hierarchical clustering on each of the two networks Θyy and Θzz, the non-zero elements were selected randomly around the diagonal of Θyz. We set the true parameters for the linear-regression-based models using the same strategy as the CGGM-based simulation above for Θyy and Θzz. We set Bxy so that 50% of the variables in x have non-zero influence on five randomly chosen variables in y in one randomly chosen community in Θyy. We set Byz in a similar manner, assuming 80% of the variables in y are relevant to eight randomly-chosen variables in z from a randomly-chosen community in Θzz.

Figure 4: Precision/recall curves for graph structure recovery in CGGM-based simulation study. (a) Θyy, (b) Θzz, (c) Bxy, (d) Byz, and (e) Θxy. (CG: CGGM-based models with supervised learning, CG-semi: CG with semi-supervised learning, LR: linear-regression-based models with supervised learning, LR-semi: LR with semi-supervised learning.)

Figure 5: Prediction errors in CGGM-based simulation study. The same estimated models in Figure 4 were used to predict (a) y given x, z, (b) z given x, (c) y given x, and (d) z given y.
Figure 6: Performance for graph structure recovery in linear-regression-based simulation study. Precision/recall curves are shown for (a) Θyy, (b) Θzz, (c) Bxy, and (d) Byz.

Figure 3: Illustration of the structured sparsity recovered by the model with CGGM components, simulated dataset. (a) Θzz. (b) Byz = −Θzz^{-1} Θyz^T shows the effects of marginalization (white vertical bars). The effects of moralization are shown in (c) Θyy + Θyz Θzz^{-1} Θyz^T, and its decomposition into (d) Θyy and (e) Θyz Θzz^{-1} Θyz^T.

Each dataset consisted of 600 samples, of which 400 and 200 samples were used as training and test sets. To select the regularization parameters, we estimated a model using 300 samples, evaluated prediction errors on the other 100 samples in the training set, and selected the values with the lowest prediction errors. We used the optimization methods in [20] for CGGM-based models and the MRCE procedure [16] for linear-regression-based models. Figure 3 illustrates how the model with CGGM chain components can be used to discover the structured sparsity via inference and moralization. In each panel, black and bright pixels correspond to zero and non-zero values, respectively. While Figure 3(a) shows how variables in z are related in Θzz, Figure 3(b) shows Byz = −Θzz^{-1} Θyz^T obtained via marginalization within the CGGM p(z|y), where functional mappings from variables in y to multiple related variables in z can be seen as white vertical bars. In Figure 3(c), the effects of moralization Θyy + Θyz Θzz^{-1} Θyz^T are shown, which further decomposes into Θyy (Figure 3(d)) and Θyz Θzz^{-1} Θyz^T (Figure 3(e)).
The additional edges among variables in y in Figure 3(e) correspond to the edges introduced via moralization and show the groupings of the variables y as the block structure along the diagonal. By examining Figures 3(b) and 3(e), we can infer a functional mapping from modules in y to modules in z. In order to systematically compare the performance of the two types of models, we examined the average performance over 30 randomly-generated datasets. We considered both supervised and semi-supervised settings. Assuming that 200 samples out of the total 400 training samples were missing data for y, for supervised learning, we used only those samples with complete data; for semi-supervised learning, we used all samples, including partially-observed cases.

Figure 7: Prediction errors in linear-regression-based simulation study. The same estimated models in Figure 6 were used to predict (a) y given x, z, (b) z given x, (c) y given x, and (d) z given y.

The precision/recall curves for recovering the true graph structures are shown in Figure 4, using datasets simulated from the true models with CGGM components. Each curve was obtained as an average over 30 different datasets. We observe that in both supervised and semi-supervised settings, the models with CGGM components outperform the ones with linear regression components. In addition, the performance of the CGGM-based models improves significantly when using the partially-observed data in addition to the fully-observed samples (the curve for CG-semi in Figure 4), compared to using only the fully-observed samples (the curve for CG in Figure 4). This improvement from using partially-observed data is substantially smaller for the linear-regression-based models.
The average prediction errors from the same set of estimated models in Figure 4 are shown in Figure 5. The CGGM-based models outperform in all prediction tasks, because they can leverage the underlying structure in the data and estimate models more effectively. For the simulation scenario using the linear-regression-based true models, we show the results for precision/recall curves and prediction errors in Figures 6 and 7, respectively. We find that even though the data were generated from chain graph models with linear regression components, the CGGM-based methods perform as well as or better than the other models. 5.2 Integrative Genomic Data Analysis

Table 1: Prediction errors, mouse diabetes data

Task      CG-semi  CG      LR-semi  LR
y | x, z  0.9070   0.9996  1.0958   0.9671
z | x     1.0661   1.0585  1.0505   1.0614
y | x     0.8989   0.9382  0.9332   0.9103
z | y     1.0712   1.0861  1.1095   1.0765

We applied the two types of three-layer chain graph models to single-nucleotide-polymorphism (SNP), gene-expression, and phenotype data from the pancreatic islets study for diabetic mice [18]. We selected 200 islet gene-expression traits after performing hierarchical clustering to find several gene modules. Our dataset also included 1000 SNPs and 100 pancreatic islet cell phenotypes. Of the total 506 samples, we used 406 as the training set, of which 100 were held out as a validation set to select regularization parameters, and used the remaining 100 samples as the test set to evaluate prediction accuracies. We considered both supervised and semi-supervised settings, assuming gene expressions are missing for 150 mice. In supervised learning, only those samples without missing gene expressions were used. As can be seen from the prediction errors in Table 1, the models with CGGM chain components are more accurate in various prediction tasks. In addition, the CGGM-based models can more effectively leverage the samples with partially-observed data than the linear-regression-based models.
6 Conclusions

In this paper, we addressed the problem of learning the structure of Gaussian chain graph models in a high-dimensional space. We argued that when the goal is to recover the model structure, using sparse CGGMs as chain component models has many advantages such as recovery of structured sparsity, computational efficiency, globally-optimal solutions for parameter estimates, and superior performance in semi-supervised learning.

Acknowledgements

This material is based upon work supported by an NSF CAREER Award No. MCB-1149885, a Sloan Research Fellowship, and an Okawa Foundation Research Grant.

References

[1] F. Abegaz and E. Wit. Sparse time series chain graphical models for reconstructing genetic networks. Biostatistics, pages 586–599, 2013.
[2] S. Andersson, D. Madigan, and D. Perlman. An alternative Markov property for chain graphs. In Proceedings of the 12th Conference on Uncertainty in Artificial Intelligence, pages 40–48. Morgan Kaufmann, 1996.
[3] S. Andersson, D. Madigan, and D. Perlman. Alternative Markov properties for chain graphs. Scandinavian Journal of Statistics, 28:33–85, 2001.
[4] W. Buntine. Chain graphs for learning. In Proceedings of the 11th Conference on Uncertainty in Artificial Intelligence, pages 46–54. Morgan Kaufmann, 1995.
[5] X. Chen, X. Shi, X. Xu, Z. Wang, R. Mills, C. Lee, and J. Xu. A two-graph guided multi-task lasso approach for eQTL mapping. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 16. JMLR W&CP, 2012.
[6] Y. Chen, J. Zhu, P.K. Lum, X. Yang, S. Pinto, D.J. MacNeil, C. Zhang, J. Lamb, S. Edwards, S.K. Sieberts, et al. Variations in DNA elucidate molecular networks that cause disease. Nature, 452(27):429–35, 2008.
[7] M. Drton and M. Eichler. Maximum likelihood estimation in Gaussian chain graph models under the alternative Markov property. Scandinavian Journal of Statistics, 33:247–57, 2006.
[8] J. Friedman, T. Hastie, and R. Tibshirani.
Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–41, 2008.
[9] M. Frydenberg. The chain graph Markov property. Scandinavian Journal of Statistics, 17:333–53, 1990.
[10] C.J. Hsieh, M. Sustik, I. Dhillon, and P. Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. In Advances in Neural Information Processing Systems (NIPS) 24, 2011.
[11] L. Jacob, G. Obozinski, and J. Vert. Group lasso with overlap and graph lasso. In Proceedings of the 26th International Conference on Machine Learning, 2009.
[12] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. The MIT Press, 2009.
[13] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, 2001.
[14] S.L. Lauritzen and N. Wermuth. Graphical models for associations between variables, some of which are qualitative and some quantitative. The Annals of Statistics, 17(1):31–57, 1989.
[15] G. Obozinski, M.J. Wainwright, and M.J. Jordan. High-dimensional union support recovery in multivariate regression. In Advances in Neural Information Processing Systems 21, 2008.
[16] A. Rothman, E. Levina, and J. Zhu. Sparse multivariate regression with covariance estimation. Journal of Computational and Graphical Statistics, 19(4):947–962, 2010.
[17] K.A. Sohn and S. Kim. Joint estimation of structured sparsity and output structure in multiple-output regression via inverse-covariance regularization. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 16. JMLR W&CP, 2012.
[18] Z. Tu, M.P. Keller, C. Zhang, M.E. Rabaglia, D.M. Greenawalt, X. Yang, I.M. Wang, H. Dai, M.D. Bruss, P.Y. Lum, Y.P. Zhou, D.M. Kemp, C. Kendziorski, B.S. Yandell, A.D. Attie, E.E. Schadt, and J. Zhu.
Integrative analysis of a cross-loci regulation network identifies app as a gene regulating insulin secretion from pancreatic islets. PLoS Genetics, 8(12):e1003107, 2012.
[19] J. Wu, Z. Gao, and H. Sun. Cascade and breakdown in scale-free networks with community structure. Physical Review, 74:066111, 2006.
[20] M. Wytock and J.Z. Kolter. Sparse Gaussian conditional random fields: algorithms, theory, and application to energy forecasting. In Proceedings of the 30th International Conference on Machine Learning, volume 28. JMLR W&CP, 2013.
[21] J. Yin and H. Li. A sparse conditional Gaussian graphical model for analysis of genetical genomics data. The Annals of Applied Statistics, 5(4):2630, 2011.
Convolutional Kernel Networks
Julien Mairal, Piotr Koniusz, Zaid Harchaoui, and Cordelia Schmid
Inria∗
firstname.lastname@inria.fr

Abstract

An important goal in visual recognition is to devise image representations that are invariant to particular transformations. In this paper, we address this goal with a new type of convolutional neural network (CNN) whose invariance is encoded by a reproducing kernel. Unlike traditional approaches where neural networks are learned either to represent data or for solving a classification task, our network learns to approximate the kernel feature map on training data. Such an approach enjoys several benefits over classical ones. First, by teaching CNNs to be invariant, we obtain simple network architectures that achieve a similar accuracy to more complex ones, while being easy to train and robust to overfitting. Second, we bridge a gap between the neural network literature and kernels, which are natural tools to model invariance. We evaluate our methodology on visual recognition tasks where CNNs have proven to perform well, e.g., digit recognition with the MNIST dataset, and the more challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive with the state of the art.

1 Introduction

We have recently seen a revival of attention given to convolutional neural networks (CNNs) [22] due to their high performance for large-scale visual recognition tasks [15, 21, 30]. The architecture of CNNs is relatively simple and consists of successive layers organized in a hierarchical fashion; each layer involves convolutions with learned filters followed by a pointwise non-linearity and a downsampling operation called “feature pooling”. The resulting image representation has been empirically observed to be invariant to image perturbations and to encode complex visual patterns [33], which are useful properties for visual recognition.
Training CNNs remains, however, difficult, since high-capacity networks may involve billions of parameters to learn, which requires both high computational power, e.g., GPUs, and appropriate regularization techniques [18, 21, 30]. The exact nature of the invariance that CNNs exhibit is also not precisely understood. Only recently has the invariance of related architectures been characterized; this is the case for the wavelet scattering transform [8] or the hierarchical models of [7]. Our work revisits convolutional neural networks, but we adopt a significantly different approach than the traditional one. Indeed, we use kernels [26], which are natural tools to model invariance [14]. Inspired by the hierarchical kernel descriptors of [2], we propose a reproducing kernel that produces multi-layer image representations. Our main contribution is an approximation scheme called convolutional kernel network (CKN) that makes the kernel approach computationally feasible. Our approach is a new type of unsupervised convolutional neural network that is trained to approximate the kernel map. Interestingly, our network uses non-linear functions that resemble rectified linear units [1, 30], even though they were not handcrafted and naturally emerge from an approximation scheme of the Gaussian kernel map. By bridging a gap between kernel methods and neural networks, we believe that we are opening a fruitful research direction for the future. Our network is learned without supervision, since the label information is only used subsequently in a support vector machine (SVM). Yet, we achieve competitive results on several datasets such as MNIST [22], CIFAR-10 [20], and STL-10 [13] with simple architectures, few parameters to learn, and no data augmentation. Open-source code for learning our convolutional kernel networks is available on the first author's webpage. (∗LEAR team, Inria Grenoble, Laboratoire Jean Kuntzmann, CNRS, Univ. Grenoble Alpes, France.)
1.1 Related Work

There have been several attempts to build kernel-based methods that mimic deep neural networks; we only review here the ones that are most related to our approach.

Arc-cosine kernels. Kernels for building deep large-margin classifiers have been introduced in [10]. The multilayer arc-cosine kernel is built by successive kernel compositions, and each layer relies on an integral representation. Similarly, our kernels rely on an integral representation and enjoy a multilayer construction. However, in contrast to arc-cosine kernels: (i) we build our sequence of kernels by convolutions, using local information over spatial neighborhoods (as opposed to compositions, using global information); (ii) we propose a new training procedure for learning a compact representation of the kernel in a data-dependent manner.

Multilayer derived kernels. Kernels with invariance properties for visual recognition have been proposed in [7]. Such kernels are built with a parameterized “neural response” function, which consists in computing the maximal response of a base kernel over a local neighborhood. Multiple layers are then built by iteratively renormalizing the response kernels and pooling using neural response functions. Learning is performed by plugging the obtained kernel into an SVM. In contrast to [7], we propagate information up, from lower to upper layers, by using sequences of convolutions. Furthermore, we propose a simple and effective data-dependent way to learn a compact representation of our kernels, and show that we obtain near state-of-the-art performance on several benchmarks.

Hierarchical kernel descriptors. The kernels proposed in [2, 3] produce multilayer image representations for visual recognition tasks. We discuss these kernels in detail in the next section: our paper generalizes them and establishes a strong link with convolutional neural networks.
2 Convolutional Multilayer Kernels

The convolutional multilayer kernel is a generalization of the hierarchical kernel descriptors introduced in computer vision [2, 3]. The kernel produces a sequence of image representations that are built on top of each other in a multilayer fashion. Each layer can be interpreted as a non-linear transformation of the previous one with additional spatial invariance. We call these layers image feature maps¹, and formally define them as follows:

Definition 1. An image feature map ϕ is a function ϕ : Ω → H, where Ω is a (usually discrete) subset of [0, 1]^d representing normalized “coordinates” in the image and H is a Hilbert space.

For all practical examples in this paper, Ω is a two-dimensional grid and corresponds to different locations in a two-dimensional image. In other words, Ω is a set of pixel coordinates. Given z in Ω, the point ϕ(z) represents some characteristics of the image at location z, or in a neighborhood of z. For instance, a color image of size m × n with three channels, red, green, and blue, may be represented by an initial feature map ϕ_0 : Ω_0 → H_0, where Ω_0 is an m × n regular grid, H_0 is the Euclidean space R³, and ϕ_0 provides the color pixel values. With the multilayer scheme, non-trivial feature maps will be obtained subsequently, which will encode more complex image characteristics. With this terminology in hand, we now introduce the convolutional kernel, first, for a single layer.

Definition 2 (Convolutional Kernel with Single Layer). Let us consider two images represented by two image feature maps, respectively ϕ and ϕ′ : Ω → H, where Ω is a set of pixel locations, and H is a Hilbert space. The one-layer convolutional kernel between ϕ and ϕ′ is defined as

K(ϕ, ϕ′) := Σ_{z∈Ω} Σ_{z′∈Ω} ‖ϕ(z)‖_H ‖ϕ′(z′)‖_H e^{−‖z−z′‖₂²/(2β²)} e^{−‖ϕ̃(z)−ϕ̃′(z′)‖_H²/(2σ²)},   (1)

¹In the kernel literature, “feature map” denotes the mapping between data points and their representation in a reproducing kernel Hilbert space (RKHS) [26].
Here, feature maps refer to spatial maps representing local image characteristics at every location, as usual in the neural network literature [22]. In (1), β and σ are smoothing parameters of Gaussian kernels, and ϕ̃(z) := (1/‖ϕ(z)‖_H) ϕ(z) if ϕ(z) ≠ 0 and ϕ̃(z) = 0 otherwise. Similarly, ϕ̃′(z′) is a normalized version of ϕ′(z′).² It is easy to show that the kernel K is positive definite (see Appendix A). It consists of a sum of pairwise comparisons between the image features ϕ(z) and ϕ′(z′) computed at all spatial locations z and z′ in Ω. To be significant in the sum, a comparison needs the corresponding z and z′ to be close in Ω, and the normalized features ϕ̃(z) and ϕ̃′(z′) to be close in the feature space H. The parameters β and σ respectively control these two notions of “closeness”. Indeed, when β is large, the kernel K is invariant to the positions z and z′, but when β is small, only features placed at the same location z = z′ are compared to each other. Therefore, the role of β is to control how much the kernel is locally shift-invariant. Next, we will show how to go beyond one single layer, but before that, we present concrete examples of simple input feature maps ϕ_0 : Ω_0 → H_0.

Gradient map. Assume that H_0 = R² and that ϕ_0(z) provides the two-dimensional gradient of the image at pixel z, which is often computed with first-order differences along each dimension. Then, the quantity ‖ϕ_0(z)‖_{H_0} is the gradient intensity, and ϕ̃_0(z) is its orientation, which can be characterized by a particular angle; that is, there exists θ in [0, 2π] such that ϕ̃_0(z) = [cos(θ), sin(θ)]. The resulting kernel K is exactly the kernel descriptor introduced in [2, 3] for natural image patches.

Patch map. In that setting, ϕ_0 associates to a location z an image patch of size m × m centered at z.
Then, the space H_0 is simply R^{m×m}, and ϕ̃_0(z) is a contrast-normalized version of the patch, which is a useful transformation for visual recognition according to classical findings in computer vision [19]. When the image is encoded with three color channels, patches are of size m × m × 3. We now define the multilayer convolutional kernel, generalizing some ideas of [2].

Definition 3 (Multilayer Convolutional Kernel). Let us consider a set Ω_{k–1} ⊆ [0, 1]^d and a Hilbert space H_{k–1}. We build a new set Ω_k and a new Hilbert space H_k as follows:
(i) choose a patch shape P_k defined as a bounded symmetric subset of [−1, 1]^d, and a set of coordinates Ω_k such that for every location z_k in Ω_k, the patch {z_k} + P_k is a subset of Ω_{k–1};³ in other words, each coordinate z_k in Ω_k corresponds to a valid patch in Ω_{k–1} centered at z_k.
(ii) define the convolutional kernel K_k on the “patch” feature maps P_k → H_{k–1}, by replacing in (1): Ω by P_k, H by H_{k–1}, and σ, β by appropriate smoothing parameters σ_k, β_k. We denote by H_k the Hilbert space for which the positive definite kernel K_k is reproducing.

An image represented by a feature map ϕ_{k–1} : Ω_{k–1} → H_{k–1} at layer k–1 is now encoded in the k-th layer as ϕ_k : Ω_k → H_k, where for all z_k in Ω_k, ϕ_k(z_k) is the representation in H_k of the patch feature map z ↦ ϕ_{k–1}(z_k + z) for z in P_k. Concretely, the kernel K_k between two patches of ϕ_{k–1} and ϕ′_{k–1} at respective locations z_k and z′_k is

Σ_{z∈P_k} Σ_{z′∈P_k} ‖ϕ_{k–1}(z_k + z)‖ ‖ϕ′_{k–1}(z′_k + z′)‖ e^{−‖z−z′‖₂²/(2β_k²)} e^{−‖ϕ̃_{k–1}(z_k+z)−ϕ̃′_{k–1}(z′_k+z′)‖²/(2σ_k²)},   (2)

where ‖.‖ is the Hilbertian norm of H_{k–1}. In Figure 1(a), we illustrate the interactions between the sets of coordinates Ω_k, patches P_k, and feature spaces H_k across layers. For two-dimensional grids, a typical patch shape is a square, for example P := {−1/n, 0, 1/n} × {−1/n, 0, 1/n} for a 3 × 3 patch in an image of size n × n.
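Since the patch-level kernel (2) is simply (1) applied to patch feature maps, the single-layer computation is easy to spell out in code. The sketch below is our own illustration (the function name and toy data are ours, not from the paper); it evaluates (1) for two feature maps given as arrays of per-location feature vectors.

```python
import numpy as np

def conv_kernel(coords, phi, phi_p, beta, sigma):
    """Single-layer convolutional kernel of Eq. (1).
    coords: (n, 2) pixel locations in [0, 1]^2 (the same grid Omega for both maps)
    phi, phi_p: (n, d) feature vectors phi(z) and phi'(z') at each location
    """
    norm = np.linalg.norm(phi, axis=1)          # ||phi(z)||_H
    norm_p = np.linalg.norm(phi_p, axis=1)
    # normalized features, keeping phi~(z) = 0 when phi(z) = 0
    tilde = np.divide(phi, norm[:, None], out=np.zeros_like(phi),
                      where=norm[:, None] > 0)
    tilde_p = np.divide(phi_p, norm_p[:, None], out=np.zeros_like(phi_p),
                        where=norm_p[:, None] > 0)
    # pairwise squared distances between locations and between normalized features
    d_spatial = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    d_feat = ((tilde[:, None, :] - tilde_p[None, :, :]) ** 2).sum(-1)
    weights = norm[:, None] * norm_p[None, :]
    return float((weights
                  * np.exp(-d_spatial / (2 * beta ** 2))
                  * np.exp(-d_feat / (2 * sigma ** 2))).sum())

# toy example on a 3x3 grid with 2-dimensional features
rng = np.random.default_rng(0)
g = np.linspace(0, 1, 3)
coords = np.array([(x, y) for x in g for y in g])
phi, phi_p = rng.normal(size=(9, 2)), rng.normal(size=(9, 2))
k = conv_kernel(coords, phi, phi_p, beta=0.5, sigma=1.0)
```

Because every pair (z, z′) contributes a non-negative summand, the value is non-negative, and swapping the two maps leaves it unchanged, as expected from the symmetry of a kernel.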
Information encoded in the k-th layer differs from the (k–1)-th one in two aspects: first, each point ϕ_k(z_k) in layer k contains information about several points from the (k–1)-th layer and can possibly represent larger patterns; second, the new feature map is more locally shift-invariant than the previous one due to the term involving the parameter β_k in (2). The multilayer convolutional kernel slightly differs from the hierarchical kernel descriptors of [2] but exploits similar ideas. Bo et al. [2] indeed define several ad hoc kernels for representing local information in images, such as gradient, color, or shape. These kernels are close to the one defined in (1) but with a few variations. Some of them do not use normalized features ϕ̃(z), and these kernels use different weighting strategies for the summands of (1) that are specialized to the image modality, e.g., color or gradient, whereas we use the same weight ‖ϕ(z)‖_H ‖ϕ′(z′)‖_H for all kernels. The generic formulation (1) that we propose may be useful per se, but our main contribution comes in the next section, where we use the kernel as a new tool for learning convolutional neural networks.

²When Ω is not discrete, the sum Σ in (1) should be replaced by the Lebesgue integral ∫ in the paper. ³For two sets A and B, the Minkowski sum A + B is defined as {a + b : a ∈ A, b ∈ B}.

Figure 1: Left: (a) concrete representation of the successive layers (hierarchy of image feature maps) for the multilayer convolutional kernel. Right: (b) one layer of the convolutional neural network that approximates the kernel (zoom between layers k–1 and k of the CKN).
3 Training Invariant Convolutional Kernel Networks

Generic schemes have been proposed for approximating a non-linear kernel with a linear one, such as the Nyström method and its variants [5, 31], or random sampling techniques in the Fourier domain for shift-invariant kernels [24]. In the context of convolutional multilayer kernels, such an approximation is critical because computing the full kernel matrix on a database of images is computationally infeasible, even for a moderate number of images (≈ 10 000) and a moderate number of layers. For this reason, Bo et al. [2] use the Nyström method for their hierarchical kernel descriptors. In this section, we show that when the coordinate sets Ω_k are two-dimensional regular grids, a natural approximation for the multilayer convolutional kernel consists of a sequence of spatial convolutions with learned filters, pointwise non-linearities, and pooling operations, as illustrated in Figure 1(b). More precisely, our scheme approximates the kernel map of K defined in (1) at layer k by finite-dimensional spatial maps ξ_k : Ω′_k → R^{p_k}, where Ω′_k is a set of coordinates related to Ω_k, and p_k is a positive integer controlling the quality of the approximation. Consider indeed two images represented at layer k by image feature maps ϕ_k and ϕ′_k, respectively. Then,
(A) the corresponding maps ξ_k and ξ′_k are learned such that K(ϕ_{k–1}, ϕ′_{k–1}) ≈ ⟨ξ_k, ξ′_k⟩, where ⟨., .⟩ is the Euclidean inner-product acting as if ξ_k and ξ′_k were vectors in R^{|Ω′_k| p_k};
(B) the set Ω′_k is linked to Ω_k by the relation Ω′_k = Ω_k + P′_k, where P′_k is a patch shape, and the quantities ϕ_k(z_k) in H_k admit finite-dimensional approximations ψ_k(z_k) in R^{|P′_k| p_k}; as illustrated in Figure 1(b), ψ_k(z_k) is a patch from ξ_k centered at location z_k with shape P′_k;
(C) an activation map ζ_k : Ω_{k–1} → R^{p_k} is computed from ξ_{k–1} by convolution with p_k filters followed by a non-linearity. The subsequent map ξ_k is obtained from ζ_k by a pooling operation.
We call this approximation scheme a convolutional kernel network (CKN). In comparison to CNNs, our approach enjoys similar benefits such as efficient prediction at test time, and involves the same set of hyper-parameters: number of layers, numbers of filters p_k at layer k, shape P′_k of the filters, and sizes of the feature maps. The other parameters β_k, σ_k can be automatically chosen, as discussed later. Training a CKN can be argued to be as simple as training a CNN in an unsupervised manner [25], since we will show that the main difference is in the cost function that is optimized.

3.1 Fast Approximation of the Gaussian Kernel

A key component of our formulation is the Gaussian kernel. We start by approximating it by a linear operation with learned filters followed by a pointwise non-linearity. Our starting point is the next lemma, which can be obtained after a simple calculation.

Lemma 1 (Linear expansion of the Gaussian Kernel). For all x and x′ in R^m, and σ > 0,

e^{−‖x−x′‖₂²/(2σ²)} = (2/(πσ²))^{m/2} ∫_{w∈R^m} e^{−‖x−w‖₂²/σ²} e^{−‖x′−w‖₂²/σ²} dw.   (3)

The lemma gives us a mapping of any x in R^m to the function w ↦ √C e^{−(1/σ²)‖x−w‖₂²} in L²(R^m), where the kernel is linear, and C is the constant in front of the integral. To obtain a finite-dimensional representation, we need to approximate the integral with a weighted finite sum, which is a classical problem arising in statistics (see [29] and chapter 8 of [6]). Then, we consider two different cases.

Small dimension, m ≤ 2. When the data lives in a compact set of R^m, the integral in (3) can be approximated by uniform sampling over a large enough set. We choose such a strategy for two types of kernels from Eq. (1): (i) the spatial kernels e^{−‖z−z′‖₂²/(2β²)}; (ii) the terms e^{−‖ϕ̃(z)−ϕ̃′(z′)‖_H²/(2σ²)} when ϕ is the “gradient map” presented in Section 2. In the latter case, H = R² and ϕ̃(z) is the gradient orientation. We typically sample a few orientations as explained in Section 4.

Higher dimensions.
To prevent the curse of dimensionality, we learn to approximate the kernel on training data, which is intrinsically low-dimensional. We optimize importance weights η = [η_l]_{l=1}^p in R^p_+ and sampling points W = [w_l]_{l=1}^p in R^{m×p} on n training pairs (x_i, y_i)_{i=1,...,n} in R^m × R^m:

min_{η∈R^p_+, W∈R^{m×p}} (1/n) Σ_{i=1}^n ( e^{−‖x_i−y_i‖₂²/(2σ²)} − Σ_{l=1}^p η_l e^{−‖x_i−w_l‖₂²/σ²} e^{−‖y_i−w_l‖₂²/σ²} )².   (4)

Interestingly, we may already draw some links with neural networks. When applied to unit-norm vectors x_i and y_i, problem (4) produces sampling points w_l whose norm is close to one. After learning, a new unit-norm point x in R^m is mapped to the vector [√η_l e^{−‖x−w_l‖₂²/σ²}]_{l=1}^p in R^p, which may be written as [f(w_l^⊤ x)]_{l=1}^p, assuming that the norm of w_l is always one, where f is the function u ↦ e^{(2/σ²)(u−1)} for u = w_l^⊤ x in [−1, 1]. Therefore, the finite-dimensional representation of x only involves a linear operation followed by a non-linearity, as in typical neural networks. In Figure 2, we show that the shape of f resembles the “rectified linear unit” function [30].

Figure 2: In dotted red, we plot the “rectified linear unit” function u ↦ max(u, 0). In blue, we plot the non-linear functions of our network for typical values of σ that we use in our experiments.

3.2 Approximating the Multilayer Convolutional Kernel

We now have all the tools in hand to build our convolutional kernel network. We start by making assumptions on the input data, and then present the learning scheme and its approximation principles.

The zeroth layer. We assume that the input data is a finite-dimensional map ξ_0 : Ω′_0 → R^{p_0}, and that ϕ_0 : Ω_0 → H_0 “extracts” patches from ξ_0. Formally, there exists a patch shape P′_0 such that Ω′_0 = Ω_0 + P′_0, H_0 = R^{p_0 |P′_0|}, and for all z_0 in Ω_0, ϕ_0(z_0) is a patch of ξ_0 centered at z_0. Then, property (B) described at the beginning of Section 3 is satisfied for k = 0 by choosing ψ_0 = ϕ_0.
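Both the expansion in Lemma 1 and the non-linearity f of Section 3.1 can be verified numerically. The following self-contained check is our own illustration (all names are ours): it approximates the integral in (3) for m = 1 on a fine grid, and confirms that, for unit-norm vectors, the Gaussian term equals f(w⊤x) with f(u) = e^{(2/σ²)(u−1)}, since ‖x−w‖² = 2 − 2w⊤x on the unit sphere.

```python
import numpy as np

sigma = 0.8

# --- check Lemma 1 (Eq. 3) for m = 1 by quadrature on a fine grid ---
x, xp = 0.3, -1.1
w = np.linspace(-20.0, 20.0, 200001)
integrand = np.exp(-(x - w) ** 2 / sigma ** 2) * np.exp(-(xp - w) ** 2 / sigma ** 2)
rhs = (2.0 / (np.pi * sigma ** 2)) ** 0.5 * integrand.sum() * (w[1] - w[0])
lhs = np.exp(-(x - xp) ** 2 / (2 * sigma ** 2))
assert abs(lhs - rhs) < 1e-8

# --- the non-linearity f(u) = exp((2/sigma^2)(u - 1)) that emerges ---
def f(u):
    return np.exp((2.0 / sigma ** 2) * (u - 1.0))

rng = np.random.default_rng(1)
a = rng.normal(size=8); a /= np.linalg.norm(a)
b = rng.normal(size=8); b /= np.linalg.norm(b)
# for unit-norm vectors, ||a - b||^2 = 2 - 2 a.b, hence the identity:
assert np.isclose(np.exp(-np.linalg.norm(a - b) ** 2 / sigma ** 2), f(a @ b))
# like max(u, 0), f is close to 0 for anti-correlated inputs and equals 1 at u = 1
assert f(-1.0) < 0.01 and np.isclose(f(1.0), 1.0)
```

The quadrature step also makes the role of the constant (2/(πσ²))^{m/2} visible: omitting it breaks the identity.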
The examples of input feature maps given earlier satisfy this finite-dimensional assumption: for the gradient map, ξ_0 is the gradient of the image along each direction, with p_0 = 2, P′_0 = {0} is a 1×1 patch, Ω_0 = Ω′_0, and ϕ_0 = ξ_0; for the patch map, ξ_0 is the input image, say with p_0 = 3 for RGB data.

The convolutional kernel network. The zeroth layer being characterized, we present in Algorithms 1 and 2 the subsequent layers and how to learn their parameters in a feedforward manner. It is interesting to note that the input parameters of the algorithm are exactly the same as for a CNN, that is, the number of layers and filters, and the sizes of the patches and feature maps (obtained here via the subsampling factor). Ultimately, CNNs and CKNs only differ in the cost function that is optimized for learning the filters and in the choice of non-linearities. As we show next, there exists a link between the parameters of a CKN and those of a convolutional multilayer kernel.

Algorithm 1 Convolutional kernel network - learning the parameters of the k-th layer.
input ξ¹_{k–1}, ξ²_{k–1}, . . . : Ω′_{k–1} → R^{p_{k–1}} (sequence of (k–1)-th maps obtained from training images); P′_{k–1} (patch shape); p_k (number of filters); n (number of training pairs);
1: extract at random n pairs (x_i, y_i) of patches with shape P′_{k–1} from the maps ξ¹_{k–1}, ξ²_{k–1}, . . .;
2: if not provided by the user, set σ_k to the 0.1 quantile of the data (‖x_i − y_i‖₂)_{i=1}^n;
3: unsupervised learning: optimize (4) to obtain the filters W_k in R^{|P′_{k–1}| p_{k–1} × p_k} and η_k in R^{p_k};
output W_k, η_k, and σ_k (smoothing parameter);

Algorithm 2 Convolutional kernel network - computing the k-th map from the (k–1)-th one.
input ξ_{k–1} : Ω′_{k–1} → R^{p_{k–1}} (input map); P′_{k–1} (patch shape); γ_k ≥ 1 (subsampling factor); p_k (number of filters); σ_k (smoothing parameter); W_k = [w_{kl}]_{l=1}^{p_k} and η_k = [η_{kl}]_{l=1}^{p_k} (layer parameters);
1: convolution and non-linearity: define the activation map ζ_k : Ω_{k–1} → R^{p_k} as

ζ_k : z ↦ [ ‖ψ_{k–1}(z)‖₂ √η_{kl} e^{−‖ψ̃_{k–1}(z)−w_{kl}‖₂²/σ_k²} ]_{l=1}^{p_k},   (5)

where ψ_{k–1}(z) is a vector representing a patch from ξ_{k–1} centered at z with shape P′_{k–1}, and the vector ψ̃_{k–1}(z) is an ℓ2-normalized version of ψ_{k–1}(z). This operation can be interpreted as a spatial convolution of the map ξ_{k–1} with the filters w_{kl} followed by pointwise non-linearities;
2: set β_k to be γ_k times the spacing between two pixels in Ω_{k–1};
3: feature pooling: Ω′_k is obtained by subsampling Ω_{k–1} by a factor γ_k, and we define a new map ξ_k : Ω′_k → R^{p_k} obtained from ζ_k by linear pooling with Gaussian weights:

ξ_k : z ↦ √(2/π) Σ_{u∈Ω_{k–1}} e^{−‖u−z‖₂²/β_k²} ζ_k(u).   (6)

output ξ_k : Ω′_k → R^{p_k} (new map);

Approximation principles. We proceed recursively to show that the kernel approximation property (A) is satisfied; we assume that (B) holds at layer k–1, and then we show that (A) and (B) also hold at layer k. This is sufficient for our purpose since we have previously assumed (B) for the zeroth layer. Given two image feature maps ϕ_{k–1} and ϕ′_{k–1}, we start by approximating K(ϕ_{k–1}, ϕ′_{k–1}) by replacing ϕ_{k–1}(z) and ϕ′_{k–1}(z′) by their finite-dimensional approximations provided by (B):

K(ϕ_{k–1}, ϕ′_{k–1}) ≈ Σ_{z,z′∈Ω_{k–1}} ‖ψ_{k–1}(z)‖₂ ‖ψ′_{k–1}(z′)‖₂ e^{−‖z−z′‖₂²/(2β_k²)} e^{−‖ψ̃_{k–1}(z)−ψ̃′_{k–1}(z′)‖₂²/(2σ_k²)}.   (7)

Then, we use the finite-dimensional approximation of the Gaussian kernel involving σ_k and obtain

K(ϕ_{k–1}, ϕ′_{k–1}) ≈ Σ_{z,z′∈Ω_{k–1}} ζ_k(z)^⊤ ζ′_k(z′) e^{−‖z−z′‖₂²/(2β_k²)},   (8)

where ζ_k is defined in (5) and ζ′_k is defined similarly by replacing ψ̃ by ψ̃′. Finally, we approximate the remaining Gaussian kernel by uniform sampling on Ω′_k, following Section 3.1.
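To make steps 1-3 of Algorithm 2 concrete, here is a deliberately simplified one-dimensional, single-channel sketch (our own illustration; the paper works with 2-D multi-channel maps, and the function and variable names are ours). It implements the activation map of (5) followed by the Gaussian pooling of (6).

```python
import numpy as np

def ckn_layer(xi, W, eta, sigma, gamma):
    """Simplified 1-D sketch of Algorithm 2 (single channel, patch size 3).
    xi: (n,) input map; W: (3, p) filters; eta: (p,) non-negative weights.
    Returns the pooled map xi_k of Eq. (6) on a grid subsampled by gamma.
    """
    n = len(xi)
    # patch extraction: psi(z) is the 3-sample patch centred at z
    psi = np.stack([xi[i - 1:i + 2] for i in range(1, n - 1)])       # (n-2, 3)
    norms = np.linalg.norm(psi, axis=1)
    tilde = np.divide(psi, norms[:, None], out=np.zeros_like(psi),
                      where=norms[:, None] > 0)
    # Eq. (5): distance of each normalized patch to each filter, then non-linearity
    d2 = ((tilde[:, :, None] - W[None, :, :]) ** 2).sum(axis=1)      # (n-2, p)
    zeta = norms[:, None] * np.sqrt(eta)[None, :] * np.exp(-d2 / sigma ** 2)
    # Eq. (6): linear pooling with Gaussian weights, beta = gamma * pixel spacing
    beta = float(gamma)
    pos = np.arange(len(zeta), dtype=float)
    sub = pos[::gamma]                                               # subsampled grid
    weights = np.exp(-((sub[:, None] - pos[None, :]) ** 2) / beta ** 2)
    return np.sqrt(2.0 / np.pi) * weights @ zeta                     # (len(sub), p)

rng = np.random.default_rng(0)
out = ckn_layer(rng.normal(size=32), rng.normal(size=(3, 4)), np.ones(4), 1.0, 2)
```

Every entry of the output is a non-negative weighted sum of activations, and stacking such layers reproduces the feedforward structure of Figure 1(b).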
After exchanging sums and grouping appropriate terms together, we obtain the new approximation

K(ϕ_{k–1}, ϕ′_{k–1}) ≈ (2/π) Σ_{u∈Ω′_k} ( Σ_{z∈Ω_{k–1}} e^{−‖z−u‖₂²/β_k²} ζ_k(z) )^⊤ ( Σ_{z′∈Ω_{k–1}} e^{−‖z′−u‖₂²/β_k²} ζ′_k(z′) ),   (9)

where the constant 2/π comes from the multiplication of the constant 2/(πβ_k²) from (3) and the weight β_k² of uniform sampling, corresponding to the square of the distance between two pixels of Ω′_k.⁴ As a result, the right-hand side is exactly ⟨ξ_k, ξ′_k⟩, where ξ_k is defined in (6), giving us property (A). It remains to show that property (B) also holds, specifically that the quantity (2) can be approximated by the Euclidean inner-product ⟨ψ_k(z_k), ψ′_k(z′_k)⟩ with the patches ψ_k(z_k) and ψ′_k(z′_k) of shape P′_k; we assume for that purpose that P′_k is a subsampled version of the patch shape P_k by a factor γ_k.

⁴The choice of β_k in Algorithm 2 is driven by signal processing principles. The feature pooling step can indeed be interpreted as a downsampling operation that reduces the resolution of the map from Ω_{k–1} to Ω_k by using a Gaussian anti-aliasing filter, whose role is to reduce frequencies above the Nyquist limit.

We remark that the kernel (2) is the same as (1) applied to layer k–1 by replacing Ω_{k–1} by {z_k}+P_k. By doing the same substitution in (9), we immediately obtain an approximation of (2). Then, all Gaussian terms are negligible for all u and z that are far from each other, say when ‖u−z‖₂ ≥ 2β_k. Thus, we may replace the sums Σ_{u∈Ω′_k} Σ_{z,z′∈{z_k}+P_k} by Σ_{u∈{z_k}+P′_k} Σ_{z,z′∈Ω_{k–1}}, which has the same set of “non-negligible” terms. This yields exactly the approximation ⟨ψ_k(z_k), ψ′_k(z′_k)⟩.

Optimization. Regarding problem (4), stochastic gradient descent (SGD) may be used since a potentially infinite amount of training data is available. However, we have preferred to use L-BFGS-B [9] on 300 000 pairs of randomly selected training data points, and to initialize W with the K-means algorithm.
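As an illustration of this optimization step, the following toy-scale sketch fits η and W for objective (4) with SciPy's L-BFGS-B. This is our own code, not the authors': it lets SciPy approximate gradients numerically and squares a free parameter to keep η non-negative, whereas the paper's implementation is larger-scale and initializes W with K-means.

```python
import numpy as np
from scipy.optimize import minimize

def fit_gaussian_map(X, Y, sigma, p, seed=0):
    """Fit weights eta and sampling points W for objective (4), toy scale.
    X, Y: (n, m) arrays of training pairs (x_i, y_i).
    Returns (final objective, objective at the random initialization).
    """
    n, m = X.shape
    rng = np.random.default_rng(seed)
    # target Gaussian kernel values for each training pair
    target = np.exp(-((X - Y) ** 2).sum(1) / (2 * sigma ** 2))

    def objective(theta):
        eta = theta[:p] ** 2                      # squared to enforce eta >= 0
        W = theta[p:].reshape(p, m)
        a = np.exp(-((X[:, None, :] - W[None]) ** 2).sum(-1) / sigma ** 2)
        b = np.exp(-((Y[:, None, :] - W[None]) ** 2).sum(-1) / sigma ** 2)
        return ((target - (a * b) @ eta) ** 2).mean()

    theta0 = np.concatenate([np.ones(p), rng.normal(size=p * m)])
    res = minimize(objective, theta0, method="L-BFGS-B")
    return res.fun, objective(theta0)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
Y = X + 0.1 * rng.normal(size=(200, 2))
final, initial = fit_gaussian_map(X, Y, sigma=1.0, p=8)
```

Because L-BFGS-B only accepts steps that decrease the objective, the fitted value never exceeds the value at the initialization.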
L-BFGS-B is a parameter-free state-of-the-art batch method, which is not as fast as SGD but much easier to use. We always run the L-BFGS-B algorithm for 4 000 iterations, which seems to ensure convergence to a stationary point. Our goal is to demonstrate the preliminary performance of a new type of convolutional network, and we leave as future work any speed improvement.

4 Experiments

We now present experiments that were performed using Matlab and an L-BFGS-B solver [9] interfaced by Stephen Becker. Each image is represented by the last map ξ_k of the CKN, which is used in a linear SVM implemented in the software package LibLinear [16]. These representations are centered, rescaled to have unit ℓ2-norm on average, and the regularization parameter of the SVM is always selected on a validation set or by 5-fold cross-validation in the range 2^i, i = −15, . . . , 15. The patches P′_k are typically small; we tried the sizes m × m with m = 3, 4, 5 for the first layer, and m = 2, 3 for the upper ones. The number of filters p_k in our experiments is in the set {50, 100, 200, 400, 800}. The downsampling factor γ_k is always chosen to be 2 between two consecutive layers, whereas the last layer is downsampled to produce final maps ξ_k of a small size, say 5×5 or 4×4. For the gradient map ϕ_0, we approximate the Gaussian kernel e^{−‖ϕ̃_0(z)−ϕ̃′_0(z′)‖²_{H_0}/(2σ_1²)} by uniformly sampling p_1 = 12 orientations, setting σ_1 = 2π/p_1. Finally, we also use a small offset ε to prevent numerical instabilities in the normalization steps ψ̃(z) = ψ(z)/max(‖ψ(z)‖₂, ε).

4.1 Discovering the Structure of Natural Image Patches

Unsupervised learning was first used for discovering the underlying structure of natural image patches by Olshausen and Field [23]. Without making any a priori assumption about the data except a parsimony principle, the method is able to produce small prototypes that resemble Gabor wavelets, that is, spatially localized oriented basis functions.
The results were found impressive by the scientific community and their work received substantial attention. It is known that such results can also be achieved with CNNs [25]. We show in this section that this is also the case for convolutional kernel networks, even though they are not explicitly trained to reconstruct data. Following [23], we randomly select a database of 300 000 whitened natural image patches of size 12 × 12 and learn p = 256 filters W using the formulation (4). We initialize W with Gaussian random noise without performing the K-means step, in order to ensure that the output we obtain is not an artifact of the initialization. In Figure 3, we display the filters associated with the top-128 largest weights η_l. Among the 256 filters, 197 exhibit interpretable Gabor-like structures and the rest were less interpretable. To the best of our knowledge, this is the first time that the explicit kernel map of the Gaussian kernel for whitened natural image patches is shown to be related to Gabor wavelets.

Figure 3: Filters obtained by the first layer of the convolutional kernel network on natural images.

4.2 Digit Classification on MNIST The MNIST dataset [22] consists of 60 000 images of handwritten digits for training and 10 000 for testing. We use two types of initial maps in our networks: the “patch map”, denoted by CKN-PM, and the “gradient map”, denoted by CKN-GM. We follow the evaluation methodology of [25] for comparison when varying the training set size. We select the regularization parameter of the SVM by 5-fold cross-validation when the training size is smaller than 20 000; otherwise, we keep 10 000 examples from the training set for validation.

Tr. size | CNN [25] | Scat-1 [8] | Scat-2 [8] | CKN-GM1 (12/50) | CKN-GM2 (12/400) | CKN-PM1 (200) | CKN-PM2 (50/200) | [32] | [18] | [19]
300      | 7.18     | 4.7        | 5.6        | 4.39            | 4.24             | 5.98          | 4.15             | NA   | NA   | NA
1K       | 3.21     | 2.3        | 2.6        | 2.60            | 2.05             | 3.23          | 2.76             | NA   | NA   | NA
2K       | 2.53     | 1.3        | 1.8        | 1.85            | 1.51             | 1.97          | 2.28             | NA   | NA   | NA
5K       | 1.52     | 1.03       | 1.4        | 1.41            | 1.21             | 1.41          | 1.56             | NA   | NA   | NA
10K      | 0.85     | 0.88       | 1          | 1.17            | 0.88             | 1.18          | 1.10             | NA   | NA   | NA
20K      | 0.76     | 0.79       | 0.58       | 0.89            | 0.60             | 0.83          | 0.77             | NA   | NA   | NA
40K      | 0.65     | 0.74       | 0.53       | 0.68            | 0.51             | 0.64          | 0.58             | NA   | NA   | NA
60K      | 0.53     | 0.70       | 0.4        | 0.58            | 0.39             | 0.63          | 0.53             | 0.47 | 0.45 | 0.53

Table 1: Test error in % for various approaches on the MNIST dataset without data augmentation. The numbers in parentheses represent the sizes p_1 and p_2 of the feature maps at each layer.

We report in Table 1 the results obtained for four simple architectures. CKN-GM1 is the simplest one: its second layer uses 3 × 3 patches and only p_2 = 50 filters, resulting in a network with 5 400 parameters. Yet, it achieves an outstanding performance of 0.58% error on the full dataset. The best performing, CKN-GM2, is similar to CKN-GM1 but uses p_2 = 400 filters. When working with raw patches, two layers (CKN-PM2) give better results than one layer. More details about the network architectures are provided in the supplementary material. In general, our method achieves state-of-the-art accuracy for this task, since lower error rates have only been reported by using data augmentation [11]. 4.3 Visual Recognition on CIFAR-10 and STL-10 We now move to the more challenging datasets CIFAR-10 [20] and STL-10 [13]. We select the best architectures on a validation set of 10 000 examples from the training set for CIFAR-10, and by 5-fold cross-validation on STL-10. We report in Table 2 results for CKN-GM, defined in the previous section, without exploiting color information, and CKN-PM when working on raw RGB patches whose mean color is subtracted. The best selected models always have two layers, with 800 filters for the top layer. Since CKN-PM and CKN-GM exploit different information, we also report a combination of the two models, CKN-CO, obtained by concatenating normalized image representations together. The standard deviations for STL-10 were always below 0.7%.
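The CKN-CO combination described above can be sketched as follows. This is a minimal Python illustration: the per-model normalization before concatenation follows the text, while the eps guard against zero norms is an added assumption:

```python
import math

def combine_representations(reps, eps=1e-12):
    # CKN-CO-style combination: l2-normalize each model's image
    # representation, then concatenate them into a single feature vector
    combined = []
    for r in reps:
        norm = math.sqrt(sum(x * x for x in r))
        combined.extend(x / max(norm, eps) for x in r)
    return combined
```

Normalizing each representation first keeps one model from dominating the concatenated feature vector purely by scale.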
Our approach appears to be competitive with the state of the art, especially on STL-10, where only one method does better than ours, despite the fact that our models use only 2 layers and require learning few parameters. Note that better results than those reported in Table 2 have been obtained in the literature by using either data augmentation (around 90% on CIFAR-10 for [18, 30]) or external data (around 70% on STL-10 for [28]). We are planning to investigate similar data manipulations in the future.

Method   | [12] | [27] | [18]  | [13] | [4]  | [17]  | [32]  | CKN-GM | CKN-PM | CKN-CO
CIFAR-10 | 82.0 | 82.2 | 88.32 | 79.6 | NA   | 83.96 | 84.87 | 74.84  | 78.30  | 82.18
STL-10   | 60.1 | 58.7 | NA    | 51.5 | 64.5 | 62.3  | NA    | 60.04  | 60.25  | 62.32

Table 2: Classification accuracy in % on CIFAR-10 and STL-10 without data augmentation.

5 Conclusion In this paper, we have proposed a new methodology for combining kernels and convolutional neural networks. We show that mixing the ideas of these two concepts is fruitful, since we achieve near state-of-the-art performance on several datasets such as MNIST, CIFAR-10, and STL-10, with simple architectures and no data augmentation. Some challenges regarding our work are left open for the future. The first one is the use of supervision to better approximate the kernel for the prediction task. The second is to leverage the kernel interpretation of our convolutional neural networks to better understand the theoretical properties of the feature spaces that these networks produce. Acknowledgments This work was partially supported by grants from ANR (project MACARON ANR-14-CE23-000301), MSR-Inria joint centre, European Research Council (project ALLEGRO), CNRS-Mastodons program (project GARGANTUA), and the LabEx PERSYVAL-Lab (ANR-11-LABX-0025). References [1] Y. Bengio. Learning deep architectures for AI. Found. Trends Mach. Learn., 2009. [2] L. Bo, K. Lai, X. Ren, and D. Fox. Object recognition with hierarchical kernel descriptors. In Proc. CVPR, 2011. [3] L. Bo, X. Ren, and D. Fox.
Kernel descriptors for visual recognition. In Adv. NIPS, 2010. [4] L. Bo, X. Ren, and D. Fox. Unsupervised feature learning for RGB-D based object recognition. In Experimental Robotics, 2013. [5] L. Bo and C. Sminchisescu. Efficient match kernel between sets of features for visual recognition. In Adv. NIPS, 2009. [6] L. Bottou, O. Chapelle, D. DeCoste, and J. Weston. Large-Scale Kernel Machines (Neural Information Processing). The MIT Press, 2007. [7] J. V. Bouvrie, L. Rosasco, and T. Poggio. On invariance in hierarchical models. In Adv. NIPS, 2009. [8] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE T. Pattern Anal., 35(8):1872–1886, 2013. [9] R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu. A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Comput., 16(5):1190–1208, 1995. [10] Y. Cho and L. K. Saul. Large-margin classification in infinite neural networks. Neural Comput., 22(10), 2010. [11] D. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Proc. CVPR, 2012. [12] A. Coates and A. Y. Ng. Selecting receptive fields in deep networks. In Adv. NIPS, 2011. [13] A. Coates, A. Y. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In Proc. AISTATS, 2011. [14] D. Decoste and B. Schölkopf. Training invariant support vector machines. Mach. Learn., 46(1-3):161–190, 2002. [15] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. preprint arXiv:1310.1531, 2013. [16] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. J. Mach. Learn. Res., 9:1871–1874, 2008. [17] R. Gens and P. Domingos. Discriminative learning of sum-product networks. In Adv. NIPS, 2012. [18] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In Proc. ICML, 2013. [19] K.
Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In Proc. ICCV, 2009. [20] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Tech. Rep., 2009. [21] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Adv. NIPS, 2012. [22] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. P. IEEE, 86(11):2278–2324, 1998. [23] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996. [24] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Adv. NIPS, 2007. [25] M. Ranzato, F.-J. Huang, Y.-L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Proc. CVPR, 2007. [26] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004. [27] K. Sohn and H. Lee. Learning invariant representations with local transformations. In Proc. ICML, 2012. [28] K. Swersky, J. Snoek, and R. P. Adams. Multi-task Bayesian optimization. In Adv. NIPS, 2013. [29] G. Wahba. Spline models for observational data. SIAM, 1990. [30] L. Wan, M. D. Zeiler, S. Zhang, Y. LeCun, and R. Fergus. Regularization of neural networks using dropconnect. In Proc. ICML, 2013. [31] C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Adv. NIPS, 2001. [32] M. D. Zeiler and R. Fergus. Stochastic pooling for regularization of deep convolutional neural networks. In Proc. ICLR, 2013. [33] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In Proc. ECCV, 2014.
Learning Chordal Markov Networks by Dynamic Programming Kustaa Kangas Teppo Niinimäki Mikko Koivisto Helsinki Institute for Information Technology HIIT Department of Computer Science, University of Helsinki {jwkangas,tzniinim,mkhkoivi}@cs.helsinki.fi Abstract We present an algorithm for finding a chordal Markov network that maximizes any given decomposable scoring function. The algorithm is based on a recursive characterization of clique trees, and it runs in O(4^n) time for n vertices. On an eight-vertex benchmark instance, our implementation turns out to be about ten million times faster than a recently proposed, constraint satisfaction based algorithm (Corander et al., NIPS 2013). Within a few hours, it is able to solve instances up to 18 vertices, and beyond if we restrict the maximum clique size. We also study the performance of a recent integer linear programming algorithm (Bartlett and Cussens, UAI 2013). Our results suggest that, unless we bound the clique sizes, currently only the dynamic programming algorithm is guaranteed to solve instances with around 15 or more vertices. 1 Introduction Structure learning in Markov networks, also known as undirected graphical models or Markov random fields, has attracted considerable interest in computational statistics, machine learning, and artificial intelligence. Natural score-and-search formulations of the task have, however, proved to be computationally very challenging. For example, Srebro [1] showed that finding a maximum-likelihood chordal (or triangulated or decomposable) Markov network is NP-hard even for networks of treewidth at most 2, in sharp contrast to the treewidth-1 case [2]. Consequently, various approximative approaches and local search heuristics have been proposed [3, 1, 4, 5, 6, 7, 8, 9, 10, 11]. Only very recently, Corander et al. [12] published the first non-trivial algorithm that is guaranteed to find a globally optimal chordal Markov network.
It is based on expressing the search space in terms of logical constraints and employing state-of-the-art solver technology equipped with optimization capabilities. To this end, they adopt the usual clique tree, or junction tree, representation of chordal graphs, and work with a particular characterization of clique trees, namely, that for any vertex of the graph the cliques containing that vertex induce a connected subtree in the clique tree. The key idea is to rephrase this property as what they call a balancing condition: for any vertex, the number of cliques that contain it is one larger than the number of edges (the intersections of the adjacent cliques) that contain it. They show that with appropriate, efficient encodings of the constraints, an eight-vertex instance can be solved to the optimum in a few days of computing, which would have been impossible by a brute-force search. However, while the constraint satisfaction approach enables exploiting the powerful technology, it is currently not clear whether it scales to larger instances. Here, we investigate an alternative approach to finding an optimal chordal Markov network. Like the work of Corander et al. [12], our algorithm stems from a particular characterization of clique trees of chordal graphs. However, our characterization is quite different, being recursive in nature. It concords with the structure of common scoring functions and so yields a natural dynamic programming algorithm that grows an optimal clique tree by selecting its cliques one by one. In its basic form, the algorithm is very inefficient. Fortunately, the fine structure of the scoring function enables us to further factorize the main dynamic programming step and so bring the time requirement down to O(4^n) for instances with n vertices. We also show that by setting the maximum clique size, equivalently the treewidth (plus one), to w ≤ n/4, the time requirement can be improved to O(3^(n−w) (n choose w) w).
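The balancing condition described above is easy to check for a candidate clique tree. A hedged Python sketch (the list-of-sets tree representation is an assumption; this checks only the counting condition, not full clique-tree validity):

```python
def satisfies_balancing(cliques, edges):
    # For every vertex v: the number of cliques containing v must be
    # exactly one larger than the number of edge separators containing v,
    # where an edge's separator is the intersection of its two endpoints.
    vertices = set().union(*[set(c) for c in cliques])
    for v in vertices:
        in_cliques = sum(1 for c in cliques if v in c)
        in_seps = sum(1 for a, b in edges if v in (set(a) & set(b)))
        if in_cliques != in_seps + 1:
            return False
    return True
```

For a valid clique tree, each vertex's cliques form a connected subtree, so its clique count exceeds its separator count by exactly one.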
While our recursive characterization of clique trees and the resulting dynamic programming algorithm are new, they are similar in spirit to a recent work by Korhonen and Parviainen [13]. Their algorithm finds a bounded-treewidth Bayesian network structure that maximizes a decomposable score, running in 3^n n^(w+O(1)) time, where w is the treewidth bound. For large w it is thus superexponentially slower than our algorithm. The problems solved by the two algorithms are, of course, different: the class of treewidth-w Bayesian networks properly extends the class of treewidth-w chordal Markov networks. There is also more recent work on finding bounded-treewidth Bayesian networks by employing constraint solvers: Berg et al. [14] solve the problem by casting it into maximum satisfiability, while Parviainen et al. [15] cast it into integer linear programming. For unbounded-treewidth Bayesian networks, O(2^n n^2)-time algorithms based on dynamic programming are available [16, 17, 18]. However, none of these dynamic programming algorithms, nor their A* search based variant [19], enables adding the constraints of chordality or bounded width. But the integer linear programming approach to finding optimal Bayesian networks, especially the recent implementation by Bartlett and Cussens [20], also enables adding the further constraints.1 We are not aware of any reasonable worst-case bounds for the algorithm's time complexity, nor of any previous applications of the algorithm to the problem of learning chordal Markov networks. As a second contribution of this paper, we report on an experimental study of the algorithm's performance, using both synthetic data and some frequently used machine learning benchmark datasets. The remainder of this article begins by formulating the learning task as an optimization problem. Next we present our recursive characterization of clique trees and a derivation of the dynamic programming algorithm, with a rigorous complexity analysis.
The experimental setting and results are reported in a dedicated section. We end with a brief discussion. 2 The problem of learning chordal Markov networks We adopt the hypergraph treatment of chordal Markov networks. For a gentler presentation and proofs, see Lauritzen and Spiegelhalter [21, Sections 6 and 7], Lauritzen [22], and references therein. Let p be a positive probability function over a product of n state spaces. Let G be an undirected graph on the vertex set V = {1, . . . , n}, and call any maximal set of pairwise adjacent vertices of G a clique. Together, G and p form a Markov network if p(x_1, . . . , x_n) = ∏_C ψ_C(x_C), where C runs through the cliques of G and each ψ_C is a mapping to positive reals. Here x_C denotes (x_v : v ∈ C). The factors ψ_C take a particularly simple form when the graph G is chordal, that is, when every cycle of G of length greater than three has a chord, which is an edge of G joining two nonconsecutive vertices of the cycle. The chordality requirement can be expressed in terms of hypergraphs. Consider first an arbitrary hypergraph on V , identified with a collection C of subsets of V such that each element of V belongs to some set in C. We call C reduced if no set in C is a proper subset of another set in C, and acyclic if, in addition, the sets in C admit an ordering C_1, . . . , C_m that has the running intersection property: for each 2 ≤ j ≤ m, the intersection S_j = C_j ∩ (C_1 ∪ · · · ∪ C_{j−1}) is a subset of some C_i with i < j. We call the sets S_j the separators. The multiset of separators, denoted by S, does not depend on the ordering and is thus unique for an acyclic hypergraph. Now, letting C be the set of cliques of the chordal graph G, it is known that the hypergraph C is acyclic and that each factor ψ_{C_j}(x_{C_j}) can be specified as the ratio p(x_{C_j})/p(x_{S_j}) of marginal probabilities (where we define p(x_{S_1}) = 1). Also the converse holds: by connecting all pairs of vertices within each set of an acyclic hypergraph we obtain a chordal graph.
Given multiple observations over the product state space, the data, we associate with each hypergraph C on V a score s(C) = ∏_{C∈C} p(C) / ∏_{S∈S} p(S), where the local score p(A) measures the probability (density) of the data projected on A ⊆ V , possibly extended by some structure prior or penalization term. The structure learning problem is to find an acyclic hypergraph C on V that maximizes the score s(C). This formulation covers a Bayesian approach, in which each p(A) is the marginal likelihood for the data on A under a Dirichlet–multinomial model [23, 7, 12], but also the maximum-likelihood formulation, in which each p(A) is the empirical probability of the data on A [23, 1]. Motivated by these instantiations, we will assume that for any given A the value p(A) can be efficiently computed, and we treat the values as the problem input. Our approach to the problem exploits the fact [22, Prop. 2.27] that a reduced hypergraph C is acyclic if and only if there is a junction tree T for C, that is, an undirected tree on the node set C that has the junction property (JP): for any two nodes A and B in C and any C on the unique path in T between A and B we have A ∩ B ⊆ C. Furthermore, by labeling each edge of T by the intersection of its endpoints, the edge labels amount to the multiset of separators of the hypergraph C. Thus a junction tree gives the separators explicitly, which motivates us to write s(T) for the respective score s(C) and solve the structure learning problem by finding a junction tree T over V that maximizes s(T). Here and henceforth, we say that a tree is over a set if the union of the tree's nodes equals the set. ¹We thank an anonymous reviewer of an earlier version of this work for noticing this fact, which apparently was not well known in the community, including the authors and reviewers of Corander et al.'s work [12].
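The score s(T) of a concrete junction tree, with each edge labeled by the intersection of its endpoints, can be computed directly. A small Python sketch (the data layout, cliques as sets and edges as pairs of cliques, is an assumption):

```python
def junction_tree_score(cliques, edges, p):
    # s(T) = product over cliques of p(C) divided by the product over
    # edges of p(A & B), where A & B is the separator labeling edge {A, B}
    score = 1.0
    for c in cliques:
        score *= p[frozenset(c)]
    for a, b in edges:
        score /= p[frozenset(a) & frozenset(b)]
    return score
```

For instance, with p(A) = 2^|A| the two-clique tree {1,2}-{2,3} scores 4 · 4 / 2 = 8, matching 2^n for n = 3 vertices.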
As our problem formulation does not explicitly refer to the underlying chordal graph and cliques, we will speak of junction trees instead of the equivalent but semantically more loaded clique trees. From here on, a junction tree refers specifically to a junction tree whose node set is a reduced hypergraph. 3 Recursive characterization and dynamic programming The score of a junction tree obeys a recursive factorization along subtrees (by rooting the tree at any node), given in Section 3.2 below. While this is the essential structural property of the score for our dynamic programming algorithm, it does not readily yield the needed recurrence for the optimal score. Indeed, we need a characterization of, not a fixed junction tree, but the entire search space of junction trees that concords with the factorization of the score. We next give such a characterization before we proceed to the derivation and analysis of the dynamic programming algorithm. 3.1 Recursive partition trees We characterize the set of junction trees by expressing the ways in which they can partition V . The idea is that when any tree of interest is rooted at some node, the subtrees amount to a partition of not only the remaining nodes in the tree (which holds trivially) but also the remaining vertices (contained in the nodes); and the subtrees also satisfy this property. See Figure 1 for an illustration. If T is a tree over a set S, we write C(T) for its node set and V(T) for the union of its nodes, S. For a family R of subsets of a set S, we say that R is a partition of S and denote R ⊏ S if the members of R are non-empty and pairwise disjoint, and their union is S. Definition 1 (Recursive partition tree, RPT). Let T be a tree over a finite set V , rooted at C ∈ C(T). Denote by C_1, . . . , C_k the children of C, by T_i the subtree rooted at C_i, and let R_i = V(T_i) \ C.
We say that T is a recursive partition tree (RPT) if it satisfies the following three conditions: (R1) each T_i is a RPT over C_i ∪ R_i, (R2) {R_1, . . . , R_k} ⊏ V \ C, and (R3) C ∩ C_i is a proper subset of both C and C_i. We denote by RPT(V, C) the set of all RPTs over V rooted at C. We now present the following theorems to establish that, when edge directions are ignored, the definitions of junction trees and recursive partition trees are equivalent. Theorem 1. A junction tree T is a RPT when rooted at any C ∈ C(T). Theorem 2. A RPT is a junction tree (when considered undirected). Our proofs of these results will use the following two observations: Observation 3. A subtree of a junction tree is also a junction tree. Observation 4. If T is a RPT, so is its every subtree rooted at any C ∈ C(T). Proof of Theorem 1. Let T be a junction tree over V and consider an arbitrary C ∈ C(T). We show by induction over the number of nodes that T is a RPT when rooted at C. Let C_i, T_i, and R_i be defined as in Definition 1 and consider the three RPT conditions. If C is the only node in T , the conditions hold trivially. Assume they hold up to n − 1 nodes and consider the case |C(T)| = n. We show that each condition holds.

Figure 1: An example of a chordal graph and a corresponding recursive partition. The root node C = {3, 4, 5} (dark grey) partitions the remaining vertices into three disjoint sets R_1 = {0, 1, 2}, R_2 = {6}, and R_3 = {7, 8, 9} (light grey), which are connected to the root node by its child nodes C_1 = {1, 2, 3}, C_2 = {4, 5, 6}, and C_3 = {5, 7}, respectively (medium grey).

(R1) By Observation 3 each T_i is a junction tree and thus, by the induction assumption, a RPT. It remains to show that V(T_i) = C_i ∪ R_i. By definition both C_i ⊆ V(T_i) and R_i ⊆ V(T_i). Thus C_i ∪ R_i ⊆ V(T_i). Assume then that x ∈ V(T_i), i.e. x ∈ C′ for some C′ ∈ C(T_i). If x ∉ R_i, then by definition x ∈ C. Since C_i is on the path between C and C′, by JP x ∈ C_i. Therefore V(T_i) ⊆ C_i ∪ R_i.
(R2) We show that the sets R_i partition V \ C. First, each R_i is non-empty since by the definition of a reduced hypergraph C_i is non-empty and not contained in C. Second, ⋃_i R_i = ⋃_i (V(T_i) \ C) = (C ∪ ⋃_i V(T_i)) \ C = (⋃ C(T)) \ C = V \ C. Finally, to see that the R_i are pairwise disjoint, assume to the contrary that x ∈ R_i ∩ R_j for distinct R_i and R_j. This implies x ∈ A ∩ B for some A ∈ C(T_i) and B ∈ C(T_j). Now, by JP x ∈ C, which contradicts the definition of R_i. (R3) Follows by the definition of a reduced hypergraph. Proof of Theorem 2. Assume now that T is a RPT over V . We show that T is a junction tree. To see that T has JP, consider arbitrary A, B ∈ C(T). We show that A ∩ B is a subset of every C ∈ C(T) on the path between A and B. Consider first the case that A is an ancestor of B and let B = C_1, . . . , C_m = A be the path that connects them. We show by induction over m that C_1 ∩ C_m ⊆ C_i for every i = 1, . . . , m. The base case m = 1 is trivial. Assume m > 1 and that the claim holds up to m − 1. If i = m, the claim is trivial. Let i < m. Denote by T_{m−1} the subtree rooted at C_{m−1} and let R_{m−1} = V(T_{m−1}) \ C_m. Since C_1 ⊆ V(T_{m−1}) we have that C_1 ∩ C_m = (C_1 ∩ V(T_{m−1})) ∩ C_m = C_1 ∩ (C_m ∩ V(T_{m−1})). By Observation 4, T_{m−1} is a RPT. Therefore, from (R1) it follows that V(T_{m−1}) = C_{m−1} ∪ R_{m−1} and thus C_m ∩ V(T_{m−1}) = (C_m ∩ C_{m−1}) ∪ (C_m ∩ R_{m−1}) = C_m ∩ C_{m−1}. Plugging this in above and using the induction assumption we get C_1 ∩ C_m = C_1 ∩ (C_m ∩ C_{m−1}) ⊆ C_1 ∩ C_{m−1} ⊆ C_i. Consider now the case that A and B have a least common ancestor C. By Observation 4, the subtree rooted at C is a RPT. Thus, by (R1) and (R2) there are disjoint R and R′ such that A ⊆ C ∪ R and B ⊆ C ∪ R′. Thus, A ∩ B ⊆ C, and consequently A ∩ B ⊆ A ∩ C. As we proved above, A ∩ C is a subset of every node on the path between A and C, and therefore A ∩ B is also a subset of every such node. Similarly, A ∩ B is a subset of every node on the path between B and C. Combining these results, we have that A ∩ B is a subset of every node on the path between A and B.
Finally, to see that C(T) is reduced, assume the opposite, that A ⊆ B for distinct A, B ∈ C(T). Let C be the node next to A on the path from A to B. By the initial assumption and JP, A ⊆ A ∩ B ⊆ C. As either A or C is a child of the other, this contradicts (R3) in the subtree rooted at the parent. 3.2 The main recurrence We want to find a junction tree T over V that maximizes the score s(T). By Theorems 1 and 2 this is equivalent to finding a RPT T that maximizes s(T). Let T be a RPT rooted at C and denote by C_1, . . . , C_k the children of C and by T_i the subtree rooted at C_i. Then, the score factorizes as follows: s(T) = p(C) ∏_{i=1}^{k} s(T_i) / p(C ∩ C_i). (1) To see this, observe that each term of s(T) is associated with a particular node or edge (separator) of T . Thus the product of the s(T_i) consists of exactly the terms of s(T), except for the ones associated with the root C of T and the edges between C and each C_i. To make use of the above factorization, we introduce suitable constraints under which an optimal tree can be constructed from subtrees that are, in turn, optimal with respect to analogous constraints (cf. Bellman's principle of optimality). Specifically, we define a function f that gives the score of an optimal subtree over any subset of nodes as follows: Definition 2. For S ⊂ V and ∅ ≠ R ⊆ V \ S, let f(S, R) be the score of an optimal RPT over S ∪ R rooted at a proper superset of S. That is, f(S, R) = max_{S ⊂ C ⊆ S ∪ R, T ∈ RPT(S ∪ R, C)} s(T). Corollary 5. The score of an optimal RPT over V is given by f(∅, V ). We now show that f admits the following recurrence, which shall be used as the basis of our dynamic programming algorithm. Lemma 6. Let S ⊂ V and ∅ ≠ R ⊆ V \ S. Then f(S, R) = max_{S ⊂ C ⊆ S ∪ R; {R_1, . . . , R_k} ⊏ R \ C; S_1, . . . , S_k ⊂ C} p(C) ∏_{i=1}^{k} f(S_i, R_i) / p(S_i). Proof. We first show inductively that the recurrence is well defined. Assume that the conditions S ⊂ V and ∅ ≠ R ⊆ V \ S hold.
Observe that R is non-empty, every set has a partition, and C is selected to be non-empty. Therefore, all three maximizations are over non-empty ranges, and it remains to show that the product over i = 1, . . . , k is well defined. If |R| = 1, then R \ C = ∅ and the product equals 1 by convention. Assume now that f(S, R) is defined when |R| < m and consider the case |R| = m. By construction S_i ⊂ V , ∅ ≠ R_i ⊆ V \ S_i, and |R_i| < |R| for every i = 1, . . . , k. Thus, by the induction assumption each f(S_i, R_i) is defined and therefore the product is defined. We now show that the recurrence indeed holds. Let the root C in Definition 2 be fixed and consider the maximization over the trees T . By Definition 1, choosing a tree T ∈ RPT(S ∪ R, C) is equivalent to choosing sets R_1, . . . , R_k, sets C_1, . . . , C_k, and trees T_1, . . . , T_k such that (R0) R_i = V(T_i) \ C, (R1) T_i is a RPT over C_i ∪ R_i rooted at C_i, (R2) {R_1, . . . , R_k} ⊏ (S ∪ R) \ C, and (R3) C ∩ C_i is a proper subset of C and C_i. Observe first that (S ∪ R) \ C = R \ C and therefore (R2) is equivalent to choosing sets R_i such that {R_1, . . . , R_k} ⊏ R \ C. Denote by S_i the intersection C ∩ C_i. We show that together (R0) and (R1) are equivalent to saying that T_i is a RPT over S_i ∪ R_i rooted at C_i. Assume first that the conditions are true. By (R1) it is sufficient to show that C_i ∪ R_i = S_i ∪ R_i. From (R1) it follows that C_i ⊆ V(T_i) and therefore C_i \ C ⊆ V(T_i) \ C, which by (R0) implies C_i \ C ⊆ R_i. This in turn implies C_i ∪ R_i = (C_i ∩ C) ∪ (C_i \ C) ∪ R_i = S_i ∪ R_i. Assume then that T_i is a RPT over S_i ∪ R_i rooted at C_i. Condition (R0) holds since V(T_i) \ C = (S_i ∪ R_i) \ C = (S_i \ C) ∪ (R_i \ C) = ∅ ∪ R_i = R_i. Condition (R1) holds since S_i ⊆ C_i ⊆ V(T_i) = S_i ∪ R_i and thus S_i ∪ R_i = C_i ∪ R_i. Finally, observe that (R3) is equivalent to first choosing S_i ⊂ C and then C_i ⊃ S_i. By (R1) it must also be that C_i ⊆ V(T_i) = S_i ∪ R_i. Based on these observations, we can now write f(S, R) = max { s(T) : S ⊂ C ⊆ S ∪ R; {R_1, . . . , R_k} ⊏ R \ C; S_1, . . . , S_k ⊂ C; for all i, S_i ⊂ C_i ⊆ R_i ∪ S_i; for all i, T_i is a RPT over S_i ∪ R_i rooted at C_i }. Next we factorize s(T) using the factorization (1) of the score. In addition, once a root C, a partition {R_1, . . . , R_k}, and separators {S_1, . . . , S_k} have been fixed, each pair (C_i, T_i) can be chosen independently for different i. Thus, the above maximization can be written as max_{S ⊂ C ⊆ S ∪ R; {R_1, . . . , R_k} ⊏ R \ C; S_1, . . . , S_k ⊂ C} p(C) ∏_{i=1}^{k} (1/p(S_i)) · max_{S_i ⊂ C_i ⊆ R_i ∪ S_i, T_i ∈ RPT(S_i ∪ R_i, C_i)} s(T_i). By applying Definition 2 to the inner maximization, the claim follows. 3.3 Fast evaluation The direct evaluation of the recurrence in Lemma 6 would be very inefficient, especially since it involves maximization over all partitions of the vertex set. In order to evaluate it more efficiently, we decompose it into multiple recurrences, each of which can take advantage of dynamic programming. Observe first that we can rewrite the recurrence as f(S, R) = max_{S ⊂ C ⊆ S ∪ R; {R_1, . . . , R_k} ⊏ R \ C} p(C) ∏_{i=1}^{k} h(C, R_i), (2) where h(C, R) = max_{S ⊂ C} f(S, R) / p(S). (3) We have simply moved the maximization over S_i ⊂ C inside the product and written each factor using a new function h. Due to how the sets C and R_i are selected, the arguments to h are always non-empty and disjoint subsets of V . In a similar fashion, we can further rewrite recurrence (2) as f(S, R) = max_{S ⊂ C ⊆ S ∪ R} p(C) g(C, R \ C), (4) where we define g(C, U) = max_{{R_1, . . . , R_k} ⊏ U} ∏_{i=1}^{k} h(C, R_i). Again, note that C and U are disjoint and C is non-empty. If U = ∅, then g(C, U) = 1. Otherwise g(C, U) = max_{∅ ≠ R ⊆ U} h(C, R) max_{{R_2, . . . , R_k} ⊏ U \ R} ∏_{i=2}^{k} h(C, R_i) = max_{∅ ≠ R ⊆ U} h(C, R) g(C, U \ R). (5) Thus, we have split the original recurrence into three simpler recurrences (4, 5, 3). We now obtain a straightforward dynamic programming algorithm that evaluates f, g, and h using these recurrences with memoization, and then outputs the score f(∅, V ) of an optimal RPT.
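The three recurrences can be turned into a short memoized implementation. The following Python sketch is an illustrative, exponential-time transcription of recurrences (3)-(5) over frozensets, not the authors' optimized C++ code; p must supply local scores for all subsets of V, with p(∅) = 1:

```python
from functools import lru_cache
from itertools import combinations

def best_junction_tree_score(V, p):
    # Returns f(emptyset, V), the score of an optimal recursive partition
    # tree over V (Corollary 5). `p` maps frozensets to local scores.
    V = frozenset(V)

    def subsets(s, nonempty=False, proper=False):
        s = sorted(s)
        lo = 1 if nonempty else 0
        hi = len(s) - (1 if proper else 0)
        for r in range(lo, hi + 1):
            for c in combinations(s, r):
                yield frozenset(c)

    @lru_cache(maxsize=None)
    def f(S, R):
        # recurrence (4): pick the root clique C = S | X with S proper subset
        # of C and C contained in S | R, i.e. X a non-empty subset of R
        return max(p[S | X] * g(S | X, R - X) for X in subsets(R, nonempty=True))

    @lru_cache(maxsize=None)
    def g(C, U):
        # recurrence (5): peel off one part R of the partition of U
        if not U:
            return 1.0
        return max(h(C, R) * g(C, U - R) for R in subsets(U, nonempty=True))

    @lru_cache(maxsize=None)
    def h(C, R):
        # recurrence (3): pick the separator S, a proper subset of C
        return max(f(S, R) / p[S] for S in subsets(C, proper=True))

    return f(frozenset(), V)
```

With p(A) = 2^|A| every junction tree over n vertices scores 2^n, since the clique sizes minus the separator sizes cover each vertex exactly once, which gives a convenient sanity check.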
3.4 Time and space requirements We measure the time requirement by the number of basic operations, namely comparisons and arithmetic operations, executed for pairs of real numbers. Likewise, we measure the space requirement by the maximum number of real values stored at any point during the execution of the algorithm. We consider both time and space in the more general setting where the width w ≤ n of the optimal network is restricted by selecting every node (clique) C in recurrence (4) with the constraint |C| ≤ w. We prove the following bounds by counting, for each of the three functions, the associated subset triplets that meet the applicable disjointness, inclusion, and cardinality constraints: Theorem 7. Let V be a set of size n and w ≤ n. Given the local scores of the subsets of V of size at most w as input, a maximum-score junction tree over V of width at most w can be found using 6 ∑_{i=0}^{w} (n choose i) 3^{n−i} basic operations and having storage for 3 ∑_{i=0}^{w} (n choose i) 2^{n−i} real numbers. Proof. To bound the number of basic operations needed, we consider the evaluation of each of the functions f, g, and h using the recurrences (4, 5, 3). Consider first f. Due to memoization, the algorithm executes at most two basic operations (one comparison and one multiplication) per triplet (S, R, C), with S and R disjoint, S ⊂ C ⊆ S ∪ R, and |C| ≤ w. Subject to these constraints, a set C of size i can be chosen in (n choose i) ways, the set S ⊂ C in at most 2^i ways, and the set R \ C in 2^{n−i} ways. Thus, the number of basic operations needed is at most N_f = 2 ∑_{i=0}^{w} (n choose i) 2^{n−i} 2^i = 2^{n+1} ∑_{i=0}^{w} (n choose i). Similarly, for h the algorithm executes at most two basic operations per triplet (C, R, S), with now C and R disjoint, |C| ≤ w, and S ⊂ C. A calculation gives the same bound as for f. Finally, consider g. Now the algorithm executes at most two basic operations per triplet (C, U, R), with C and U disjoint, |C| ≤ w, and ∅ ≠ R ⊆ U.
A set C of size i can be chosen in (n choose i) ways, and the remaining n − i elements can be assigned into U and its subset R in 3^{n−i} ways. Thus, the number of basic operations needed is at most N_g = 2 ∑_{i=0}^{w} (n choose i) 3^{n−i}. Finally, it is sufficient to observe that there is a j such that (n choose i) 3^{n−i} is larger than (n choose i) 2^n when i ≤ j, and smaller when i > j. Now, because both terms sum up to the same value 4^n when i = 0, . . . , n, the bound N_g is always greater than or equal to N_f. We bound the storage requirement in a similar manner. For each function, the size of the first argument is at most w and the second argument is disjoint from the first, yielding the claimed bound.

Figure 2: The running time of Junctor and GOBNILP as a function of the number of vertices for varying widths w (w = 3, 4, 5, 6, ∞), on sparse (top) and dense (bottom) synthetic instances with 100 ("small"), 1000 ("medium"), and 10,000 ("large") data samples. The dashed red line indicates the 4-hour timeout or memout. For GOBNILP shown is the median of the running times on 15 random instances.

Remark 1. For w = n, the bounds for the number of basic operations and storage requirement in Theorem 7 become 6 · 4^n and 3 · 3^n, respectively. When w ≤ n/4, the former bound can be replaced by 6w (n choose w) 3^{n−w}, since (n choose i) 3^{n−i} ≤ (n choose i+1) 3^{n−i−1} if and only if i ≤ (n − 3)/4. Remark 2. Memoization requires indexing with pairs of disjoint sets. Representing sets as integers allows efficient lookups in a two-dimensional array, using O(4^n) space. We can achieve O(3^n) space by mapping a pair of sets (A, B) to ∑_{a=1}^{n} 3^{a−1} I_a(A, B), where I_a(A, B) is 1 if a ∈ A, 2 if a ∈ B, and 0 otherwise.
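The base-3 mapping of Remark 2 can be sketched in Python as follows. This is a naïve O(n) evaluation; the constant-amortized-time incremental update the paper mentions is omitted:

```python
def pair_index(A, B, n):
    # Index of a pair of disjoint subsets A, B of {1, ..., n}:
    # sum over a of 3^(a-1) * I_a(A, B), where I_a is 1 if a is in A,
    # 2 if a is in B, and 0 otherwise. Each element contributes one
    # base-3 digit, so distinct pairs get distinct indices in [0, 3^n).
    index = 0
    for a in range(1, n + 1):
        digit = 1 if a in A else (2 if a in B else 0)
        index += 3 ** (a - 1) * digit
    return index
```

Since every element falls into exactly one of three cases (in A, in B, or in neither), the 3^n disjoint pairs map bijectively onto a compact array of size 3^n.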
Each pair thus gets a unique index from 0 to $3^n - 1$ into a compact array. A naïve evaluation of the index adds an O(n) factor to the running time. This can be improved to constant amortized time by updating the index incrementally while iterating over sets.

4 Experimental results

We have implemented the presented algorithm in a C++ program Junctor (Junction Trees Optimally Recursively), which is publicly available at www.cs.helsinki.fi/u/jwkangas/junctor/. In the experiments reported below, we compared the performance of Junctor and the integer linear programming based solver GOBNILP by Bartlett and Cussens [20]. While GOBNILP has been tailored for finding an optimal Bayesian network, it enables forbidding the so-called v-structures in the network and, thereby, finding an optimal chordal Markov network, provided that we use the BDeu score, as we have done, or some other special scoring function [23, 24]. We note that when forbidding v-structures, the standard score pruning rules [20, 25] are no longer valid.

We first investigated the performance on synthetic data generated from Bayesian networks of varying size and density. We generated 15 datasets for each combination of the number of vertices n from 8 to 18, maximum indegree k = 4 (sparse) or k = 8 (dense), and the number of samples m equaling 100, 1000, or 10,000, as follows. Along a random vertex ordering, we first drew for each vertex the number of its parents from the uniform distribution between 0 and k, and then the actual parents uniformly at random from its predecessors in the vertex ordering. Next, we assigned each vertex two possible states and drew the parameters of the conditional distributions from the uniform distribution. Finally, from the obtained joint distribution, we drew m independent samples.

Table 1: Benchmark instances with different numbers of attributes (n) and samples (m).

Dataset      Abbr.  n   m
Tic-tac-toe  X      10  958
Poker        P      11  10000
Bridges      B      12  108
Flare        F      13  1066
Zoo          Z      17  101
Voting       V      17  435
Tumor        T      18  339
Lymph        L      19  148
Hypothyroid         22  3772
Mushroom            22  8124

[Figure 3: The running time of Junctor against GOBNILP on the benchmark instances with at most 19 attributes, given in Table 1. The dashed red line indicates the 4-hour timeout or memout.]

The input for Junctor and GOBNILP was produced using the BDeu score with equivalent sample size 1. For both programs, we varied the maximum width parameter w from 3 to 6 and, in addition, examined the case of unbounded width (w = ∞). Because the performance of Junctor depends only on n and w, we ran it only once for each combination of the two. In contrast, the performance of GOBNILP is very sensitive to various characteristics of the data, and we therefore ran it for all the combinations. All runs were allowed 4 CPU hours and 32 GB of memory.

The results (Figure 2) show that for large widths Junctor scales better than GOBNILP (with respect to n), and even for low widths Junctor is superior to GOBNILP for smaller n. We found GOBNILP to exhibit moderate variance: 93% of all running times (excluding timeouts) were within a factor of 5 of the respective medians shown in Figure 2, while 73% were within a factor of 2. We also observe that the running time of GOBNILP may behave "discontinuously" (e.g., on small datasets around 15 vertices with width 4). We also evaluated both programs on several benchmark instances taken from the UCI repository [26]. The datasets are summarized in Table 1. Figure 3 shows the results on the instances with at most 19 attributes, for which the runs were, again, allowed 4 CPU hours and 32 GB of memory.
The results are qualitatively in good agreement with the results obtained on synthetic data. For example, solving the Bridges dataset on 12 attributes with width 5 takes less than one second with Junctor but around 7 minutes with GOBNILP. For the two 22-attribute datasets we allowed both programs one week of CPU time and 128 GB of memory. Junctor was able to solve each within 33 hours for w = 3 and within 74 hours for w = 4. GOBNILP was able to solve Hypothyroid up to w = 6 (in 24 hours, or less for small widths), but Mushroom only up to w = 3. For higher widths GOBNILP ran out of time.

5 Concluding remarks

We have investigated the structure learning problem in chordal Markov networks. We showed that the commonly used scoring functions factorize in a way that enables a relatively efficient dynamic programming treatment. Our algorithm is the first that is guaranteed to solve moderate-size instances to the optimum within reasonable time. For example, whereas Corander et al. [12] report that their algorithm took more than 3 days on an eight-variable instance, our Junctor program solves any eight-variable instance within 20 milliseconds. We also reported on the first evaluation of GOBNILP [20] for solving the problem, which highlighted the advantages of the dynamic programming approach.

Acknowledgments

This work was supported by the Academy of Finland, grant 276864. The authors thank Matti Järvisalo for useful discussions on constraint programming approaches to learning Markov networks.

References

[1] N. Srebro. Maximum likelihood bounded tree-width Markov networks. Artificial Intelligence, 143(1):123–138, 2003.
[2] C. K. Chow and C. N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14:462–467, 1968.
[3] S. Della Pietra, V. J. Della Pietra, and J. D. Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380–393, 1997.
[4] M. Narasimhan and J. A. Bilmes. PAC-learning bounded tree-width graphical models. In D. M. Chickering and J. Y. Halpern, editors, UAI, pages 410–417. AUAI Press, 2004.
[5] P. Abbeel, D. Koller, and A. Y. Ng. Learning factor graphs in polynomial time and sample complexity. Journal of Machine Learning Research, 7:1743–1788, 2006.
[6] A. Chechetka and C. Guestrin. Efficient principled learning of thin junction trees. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, NIPS. Curran Associates, Inc., 2007.
[7] J. Corander, M. Ekdahl, and T. Koski. Parallell interacting MCMC for learning of topologies of graphical models. Data Mining and Knowledge Discovery, 17(3):431–456, 2008.
[8] G. Elidan and S. Gould. Learning bounded treewidth Bayesian networks. Journal of Machine Learning Research, 9:2699–2731, 2008.
[9] F. Bromberg, D. Margaritis, and V. Honavar. Efficient Markov network structure discovery using independence tests. Journal of Artificial Intelligence Research, 35:449–484, 2009.
[10] J. Davis and P. Domingos. Bottom-up learning of Markov network structure. In J. Fürnkranz and T. Joachims, editors, ICML, pages 271–278. Omnipress, 2010.
[11] J. Van Haaren and J. Davis. Markov network structure learning: A randomized feature generation approach. In J. Hoffmann and B. Selman, editors, AAAI, pages 1148–1154. AAAI Press, 2012.
[12] J. Corander, T. Janhunen, J. Rintanen, H. J. Nyman, and J. Pensar. Learning chordal Markov networks by constraint satisfaction. In C. J. C. Burges, L. Bottou, Z. Ghahramani, and K. Q. Weinberger, editors, NIPS, pages 1349–1357, 2013.
[13] J. Korhonen and P. Parviainen. Exact learning of bounded tree-width Bayesian networks. In C. M. Carvalho and P. Ravikumar, editors, AISTATS, volume 31 of JMLR Proceedings, pages 370–378. JMLR.org, 2013.
[14] J. Berg, M. Järvisalo, and B. Malone. Learning optimal bounded treewidth Bayesian networks via maximum satisfiability. In S. Kaski and J. Corander, editors, AISTATS, pages 86–95. JMLR.org, 2014.
[15] P. Parviainen, H. S. Farahani, and J. Lagergren. Learning bounded tree-width Bayesian networks using integer linear programming. In S. Kaski and J. Corander, editors, AISTATS, pages 751–759. JMLR.org, 2014.
[16] S. Ott, S. Imoto, and S. Miyano. Finding optimal models for small gene networks. In R. B. Altman, A. K. Dunker, L. Hunter, and T. E. Klein, editors, PSB, pages 557–567. World Scientific, 2004.
[17] M. Koivisto and K. Sood. Exact Bayesian structure discovery in Bayesian networks. Journal of Machine Learning Research, 5:549–573, 2004.
[18] T. Silander and P. Myllymäki. A simple approach for finding the globally optimal Bayesian network structure. In R. Dechter and T. S. Richardson, editors, UAI, pages 445–452. AUAI Press, 2006.
[19] C. Yuan and B. Malone. Learning optimal Bayesian networks: A shortest path perspective. Journal of Artificial Intelligence Research, 48:23–65, 2013.
[20] M. Bartlett and J. Cussens. Advances in Bayesian network learning using integer programming. In UAI, pages 182–191. AUAI Press, 2013.
[21] S. L. Lauritzen and D. J. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society, Series B (Methodological), 50(2):157–224, 1988.
[22] S. L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[23] A. P. Dawid and S. L. Lauritzen. Hyper Markov laws in the statistical analysis of decomposable graphical models. The Annals of Statistics, 21(3):1272–1317, 1993.
[24] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197–243, 1995.
[25] C. P. de Campos and Q. Ji. Efficient structure learning of Bayesian networks using constraints. Journal of Machine Learning Research, 12:663–689, 2011.
[26] K. Bache and M. Lichman. UCI machine learning repository, 2013.
From MAP to Marginals: Variational Inference in Bayesian Submodular Models

Josip Djolonga, Department of Computer Science, ETH Zürich, josipd@inf.ethz.ch
Andreas Krause, Department of Computer Science, ETH Zürich, krausea@ethz.ch

Abstract

Submodular optimization has found many applications in machine learning and beyond. We carry out the first systematic investigation of inference in probabilistic models defined through submodular functions, generalizing regular pairwise MRFs and Determinantal Point Processes. In particular, we present L-FIELD, a variational approach to general log-submodular and log-supermodular distributions based on sub- and supergradients. We obtain both lower and upper bounds on the log-partition function, which enables us to compute probability intervals for marginals, conditionals, and marginal likelihoods. We also obtain fully factorized approximate posteriors, at the same computational cost as ordinary submodular optimization. Our framework results in convex problems for optimizing over differentials of submodular functions, which we show how to solve optimally. We provide theoretical guarantees on the approximation quality with respect to the curvature of the function. We further establish natural relations between our variational approach and the classical mean-field method. Lastly, we empirically demonstrate the accuracy of our inference scheme on several submodular models.

1 Introduction

Submodular functions [1] are a rich class of set functions $F : 2^V \to \mathbb{R}$, investigated originally in game theory and combinatorial optimization. They capture natural notions such as diminishing returns and economies of scale. In recent years, submodular optimization has seen many important applications in machine learning, including active learning [2], recommender systems [3], document summarization [4], representation learning [5], clustering [6], and the design of structured norms [7].
In this work, instead of using submodular functions to obtain point estimates through optimization, we take a Bayesian approach and define probabilistic models over sets (so-called point processes) using submodular functions. Many of the aforementioned applications can be understood as performing MAP inference in such models. We develop L-FIELD, a general variational inference scheme for reasoning about log-supermodular ($P(A) \propto \exp(-F(A))$) and log-submodular ($P(A) \propto \exp(F(A))$) distributions, where F is a submodular set function.

Previous work. There has been extensive work on submodular optimization (both approximate and exact minimization and maximization; see, e.g., [8, 9, 10, 11]). In contrast, we are unaware of previous work that addresses the general problem of probabilistic inference in Bayesian submodular models. There are two important special cases that have received significant interest. The most prominent examples are undirected pairwise Markov Random Fields (MRFs) with binary variables, also called the Ising model [12], due to their importance in statistical physics and their applications, e.g., in computer vision. While MAP inference is efficient for regular (log-supermodular) MRFs, computing the partition function is known to be #P-hard [13], and the approximation problem has also been shown to be hard [14]. Also, there is no FPRAS in the log-submodular case unless RP = NP [13]. An important case of log-submodular distributions is the Determinantal Point Process (DPP), used in machine learning as a principled way of modeling diversity. Its partition function can be computed efficiently, and a 1/4-approximation scheme for finding the (NP-hard) MAP [15] is known. In this paper, we propose a variational inference scheme for general Bayesian submodular models that encompasses these two and many other distributions, and has instance-dependent quality guarantees. A hallmark of these models is that they capture high-order interactions between many random variables.
Existing variational approaches [16] cannot efficiently cope with such high-order interactions — they generally have to sum over all variables in a factor, scaling exponentially in the size of the factor. We discuss this prototypically for mean-field in Sec. 5.

Our contributions. In summary, our main contributions are:

• We provide the first general treatment of probabilistic inference with log-submodular and log-supermodular distributions, which can capture high-order variable interactions.
• We develop L-FIELD, a novel variational inference scheme that optimizes over sub- and supergradients of submodular functions. Our scheme yields both upper and lower bounds on the partition function, which imply rigorous probability intervals for marginals. We can also obtain factorial approximations of the distribution at no larger computational cost than performing MAP inference in the model (for which a plethora of algorithms are available).
• We identify a natural link between our scheme and the well-known mean-field method.
• We establish theoretical guarantees on the accuracy of our bounds, dependent on the curvature of the underlying submodular function.
• We demonstrate the accuracy of L-FIELD on several Bayesian submodular models.

2 Submodular functions and optimization

Submodular functions are set functions satisfying a diminishing-returns condition. Formally, let V be some finite ground set, w.l.o.g. V = {1, . . . , n}, and consider a set function $F : 2^V \to \mathbb{R}$. The marginal gain of adding item i ∈ V to the set A ⊆ V w.r.t. F is defined as $F(i \mid A) = F(A \cup \{i\}) - F(A)$. Then, a function $F : 2^V \to \mathbb{R}$ is said to be submodular if for all A ⊆ B ⊆ V and i ∈ V \ B it holds that $F(i \mid A) \ge F(i \mid B)$. A function F is called supermodular if −F is submodular. Without loss of generality¹, we will also make the assumption that F is normalized so that F(∅) = 0. The problem of submodular function optimization has received significant attention.
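The diminishing-returns condition can be checked by brute force on tiny ground sets. The following sketch (with illustrative cardinality-based functions of our own choosing, not from the paper) is exponential in |V| and meant only to make the definition concrete.

```python
from itertools import combinations
import math

def marginal_gain(F, i, A):
    """F(i | A) = F(A ∪ {i}) - F(A)."""
    return F(A | {i}) - F(A)

def is_submodular(F, V):
    """Brute-force check of diminishing returns:
    F(i | A) >= F(i | B) for all A ⊆ B ⊆ V and i in V \\ B."""
    subsets = [set(c) for r in range(len(V) + 1)
               for c in combinations(V, r)]
    for B in subsets:
        for A in subsets:
            if not A <= B:
                continue
            for i in V - B:
                if marginal_gain(F, i, A) < marginal_gain(F, i, B) - 1e-12:
                    return False
    return True

V = {1, 2, 3, 4}
F_sqrt = lambda A: math.sqrt(len(A))   # concave of cardinality: submodular
F_sq = lambda A: float(len(A) ** 2)    # convex of cardinality: not submodular
assert is_submodular(F_sqrt, V)
assert not is_submodular(F_sq, V)
```

Concave functions of the cardinality, such as √|A|, are a standard source of submodular examples; |A|² violates diminishing returns because the gain 2|A| + 1 grows with the set.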
The (unconstrained) minimization of submodular functions, $\min_{A} F(A)$, can be done in polynomial time. While general-purpose algorithms [8] can be impractical due to the high order of their polynomial running time, several classes of functions admit faster, specialized algorithms, e.g., [17, 18, 19]. Many important problems can be cast as the minimization of a submodular objective, ranging from image segmentation [20, 12] to clustering [6]. Submodular maximization has also found numerous applications, e.g., experimental design [21], document summarization [4], and representation learning [5]. While this problem is in general NP-hard, effective constant-factor approximation algorithms exist (e.g., [22, 11]). In this paper we lift results from submodular optimization to probabilistic inference, which lets us quantify uncertainty about the solutions of the problem, instead of binding us to a single one. Our approach allows us to obtain (approximate) marginals at the same cost as traditional MAP inference.

3 Probabilistic inference in Bayesian submodular models

Which Bayesian models are associated with submodular functions? Suppose $F : 2^V \to \mathbb{R}$ is a submodular set function. We consider distributions over subsets² A ⊆ V of the form $P(A) = \frac{1}{Z} e^{+F(A)}$ and $P(A) = \frac{1}{Z} e^{-F(A)}$, which we call log-submodular and log-supermodular, respectively. The normalizing quantity $Z = \sum_{S \subseteq V} e^{\pm F(S)}$ is called the partition function, and −log Z is also known as the free energy in the statistical physics literature. Note that distributions over subsets of V are isomorphic to distributions of |V| = n binary random variables $X_1, \ldots, X_n \in \{0, 1\}$ — we simply identify $X_i$ as the indicator of the event i ∈ A, or formally $X_i = [i \in A]$.

Examples of log-supermodular distributions. There are many distributions that fit this framework. As a prominent example, consider binary pairwise Markov random fields (MRFs).

¹The functions F(A) and F(A) + c encode the same distributions by virtue of normalization.
Such an MRF takes the form $P(X_1, \ldots, X_n) = \frac{1}{Z} \prod_{i,j} \phi_{i,j}(X_i, X_j)$. Assuming the potentials $\phi_{i,j}$ are positive, such MRFs are equivalent to distributions $P(A) \propto \exp(-F(A))$, where $F(A) = \sum_{i,j} F_{i,j}(A)$ and $F_{i,j}(A) = -\log \phi_{i,j}([i \in A], [j \in A])$. An MRF is called regular iff each $F_{i,j}$ is submodular (and consequently P(A) is log-supermodular). Such models are extensively used in applications, e.g., in computer vision [12]. More generally, a rich class of distributions can be defined using decomposable submodular functions, which can be written as sums of (usually simpler) submodular functions. As an example, let $G_1, \ldots, G_k \subseteq V$ be groups of elements and let $\phi_1, \ldots, \phi_k : [0, \infty) \to \mathbb{R}$ be concave. Then, the function $F(A) = \sum_{i=1}^{k} \phi_i(|G_i \cap A|)$ is submodular. Models using these types of functions strictly generalize pairwise MRFs and can capture higher-order variable interactions, which can be crucial in computer vision applications such as semantic segmentation (e.g., [23]).

Examples of log-submodular distributions. A prominent example of log-submodular distributions are Determinantal Point Processes (DPPs) [24]. A DPP is a distribution over sets A of the form $P(A) = \frac{1}{Z} \exp(F(A))$, where $F(A) = \log |K_A|$. Here, $K \in \mathbb{R}^{V \times V}$ is a positive semi-definite matrix, $K_A$ is the square submatrix indexed by A, and |·| denotes the determinant. Because K is positive semi-definite, F(A) is known to be submodular, and hence DPPs are log-submodular. Another natural model is that of facility location. Assume that we have a set of locations V where we can open shops, and a set N of customers that we would like to serve. For each customer i ∈ N and location j ∈ V we have a non-negative number $C_{i,j}$ quantifying how much service i gets from location j. Then, we consider $F(A) = \sum_{i \in N} \max_{j \in A} C_{i,j}$. We can also penalize the number of open shops and use a distribution $P(A) \propto \exp(F(A) - \lambda|A|)$ for λ > 0.

²In the appendix we also consider cardinality constraints, i.e., distributions over sets A that satisfy |A| ≤ k.
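A minimal numeric sketch of the facility-location model just described, with a made-up service matrix C and penalty λ (both our own illustrative choices); for a tiny V the MAP set, i.e., the mode of P(A) ∝ exp(F(A) − λ|A|), can be found by exhaustive enumeration.

```python
from itertools import combinations

# Hypothetical service matrix: C[i][j] = service customer i gets from location j.
C = [[3.0, 1.0, 0.5],
     [0.5, 2.0, 1.0],
     [1.0, 0.5, 2.5]]
lam = 1.5  # penalty per open shop

def F(A):
    """Penalized facility-location score: sum_i max_{j in A} C[i][j] - lam*|A|.
    The max over an empty set of locations is taken as 0."""
    total = sum(max((row[j] for j in A), default=0.0) for row in C)
    return total - lam * len(A)

V = range(3)
subsets = [frozenset(c) for r in range(4) for c in combinations(V, r)]
A_map = max(subsets, key=F)  # exact MAP by enumeration (tiny V only)
```

Here opening locations 0 and 2 serves customers 0 and 2 well while customer 1 is served adequately by either, so the enumeration returns {0, 2} with score 3.5.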
Such objectives have been used for optimization in many applications, ranging from clustering [25] to recommender systems [26].

The Inference Challenge. Having introduced the models that we consider, we now show how to do inference in them³. Let us introduce the following operations, which preserve submodularity.

Definition 1. Let $F : 2^V \to \mathbb{R}$ be submodular and let X, Y ⊆ V. Define the submodular function $F^X$ as the restriction of F to $2^X$, and $F_X : 2^{V-X} \to \mathbb{R}$ as $F_X(A) = F(A \cup X) - F(X)$.

First, let us see how to compute marginals. The probability that the random subset S distributed as $P(S = A) \propto \exp(-F(A))$ lies in some non-empty lattice $[X, Y] = \{A \mid X \subseteq A \subseteq Y\}$ is equal to

$P(S \in [X, Y]) = \frac{1}{Z} \sum_{X \subseteq A \subseteq Y} \exp(-F(A)) = \frac{1}{Z} \sum_{A \subseteq Y - X} \exp(-F(X \cup A)) = \frac{e^{-F(X)} Z_X^Y}{Z},$ (1)

where $Z_X^Y = \sum_{A \subseteq Y - X} e^{-(F(X \cup A) - F(X))}$ is the partition function of $(F_X)^Y$. Marginals P(i ∈ S) for any i ∈ V can be obtained using [{i}, V]. We also obtain conditionals — if, for example, we condition on the event in (1), we have $P(S = A \mid S \in [X, Y]) = \exp(-F(A))/Z_X^Y$ if A ∈ [X, Y], and 0 otherwise. Note that log-supermodular distributions are conjugate with each other: for a log-supermodular prior $P(A) \propto \exp(-F(A))$ and a likelihood function⁴ $P(E \mid A) \propto \exp(-L(E; A))$, for which L is submodular w.r.t. A for each evidence E, the posterior $P(A \mid E) \propto \exp(-(F(A) + L(E; A)))$ is log-supermodular as well. The same holds for log-submodular distributions.

4 The variational approach

In Section 3 we have seen that, due to the closure properties of submodular functions, important inference tasks (e.g., marginals, conditioning) in Bayesian submodular models require computing partition functions of suitably defined or restricted submodular functions. Given that the general problem is #P-hard, we seek approximate methods. The main idea is to exploit the peculiar property of submodular functions that they can be both lower- and upper-bounded using simple additive functions of the form s(A) + c, where c ∈ ℝ and $s : 2^V \to \mathbb{R}$ is modular, i.e.,
it satisfies $s(A) = \sum_{i \in A} s(\{i\})$. We will also treat modular functions s(·) as vectors $s \in \mathbb{R}^V$ with coordinates $s_i = s(\{i\})$. Because modular functions have tractable log-partition functions, we obtain the following bounds.

Lemma 1. If for all A ⊆ V it holds that $s_l(A) + c_l \le F(A) \le s_u(A) + c_u$ for modular $s_u, s_l$ and $c_l, c_u \in \mathbb{R}$, then

$\log Z^+(s_l, c_l) \le \log \sum_{A \subseteq V} \exp(+F(A)) \le \log Z^+(s_u, c_u)$ and
$\log Z^-(s_u, c_u) \le \log \sum_{A \subseteq V} \exp(-F(A)) \le \log Z^-(s_l, c_l)$,

where $\log Z^+(s, c) = c + \sum_{i \in V} \log(1 + e^{s_i})$ and $\log Z^-(s, c) = -c + \sum_{i \in V} \log(1 + e^{-s_i})$.

³We consider log-supermodular distributions, as the log-submodular case is analogous.
⁴Such submodular loss functions L have been considered, e.g., in document summarization [4].

We can use any modular (upper or lower) bound s(A) + c to define a completely factorized distribution that can serve as a proxy for approximating quantities of interest of the original distribution. For example, the marginal of i ∈ A under $Q(A) \propto \exp(-s(A) + c)$ is easily seen to be $1/(1 + e^{s_i})$. Instead of optimizing over all possible bounds of the above form, we consider for each X ⊆ V two sets of modular functions which are exact at X and lower- or upper-bound F, respectively. Similarly as for convex functions, we define [8, §6.2] the subdifferential of F at X as

$\partial_F(X) = \{s \in \mathbb{R}^n \mid \forall Y \subseteq V : F(Y) \ge F(X) + s(Y) - s(X)\}.$ (2)

The superdifferential $\partial^F(X)$ is defined analogously by inverting the inequality sign [27]. For each subgradient $s \in \partial_F(X)$, the function $g_X(Y) = s(Y) + F(X) - s(X)$ lower-bounds F. Similarly, for a supergradient $s \in \partial^F(X)$, $h_X(Y) = s(Y) + F(X) - s(X)$ is an upper bound on F. Note that both $h_X$ and $g_X$ are of the form we considered (modular plus constant) and are tight at X, i.e., $h_X(X) = g_X(X) = F(X)$. Because we will be optimizing over differentials, we define for any X ⊆ V the shorthands $Z^+_X(s) = Z^+(s, F(X) - s(X))$ and $Z^-_X(s) = Z^-(s, F(X) - s(X))$.
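As a quick numerical sanity check of Lemma 1 (with our own toy function F(A) = √|A| and a greedy modular lower bound with c = 0, so purely illustrative), the exact value of $\log \sum_A \exp(-F(A))$ indeed stays below $\log Z^-(s_l, 0)$:

```python
from itertools import combinations
from math import exp, log, sqrt

V = [1, 2, 3, 4]
F = lambda A: sqrt(len(A))  # submodular, F(∅) = 0

# A modular lower bound with c = 0: s_i = F({1..i}) - F({1..i-1}),
# the greedy construction along the natural order; by submodularity
# s(A) <= F(A) for all A, with equality on prefixes.
s = {i: sqrt(i) - sqrt(i - 1) for i in V}

subsets = [set(c) for r in range(len(V) + 1) for c in combinations(V, r)]
logZ_minus = log(sum(exp(-F(A)) for A in subsets))  # exact, 2^|V| terms
upper = sum(log(1 + exp(-s[i])) for i in V)         # log Z^-(s, 0), |V| terms

assert all(sum(s[i] for i in A) <= F(A) + 1e-12 for A in subsets)
assert logZ_minus <= upper + 1e-12  # Lemma 1 for the lower bound s
```

The bound takes only |V| logarithms to evaluate, while the exact quantity requires summing over all 2^|V| subsets, which is the whole point of the modular relaxation.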
4.1 Optimizing over subgradients

To analyze the problem of minimizing $\log Z^-_X(s)$ subject to $s \in \partial_F(X)$, we introduce the base polyhedron of F, defined as $B(F) = \{s \in \mathbb{R}^V \mid s(V) = F(V) \text{ and } \forall A \subseteq V : s(A) \le F(A)\}$, i.e., the set of modular lower bounds that are exact at V. As the following lemma shows, we do not have to consider $\log Z^-_X$ for all X; we can restrict our attention to the case X = ∅.

Lemma 2. For all X ⊆ V we have $\min_{s \in \partial_F(\emptyset)} Z^-_\emptyset(s) \le \min_{s \in \partial_F(X)} Z^-_X(s)$. Moreover, the former problem is equivalent to

minimize over s: $\sum_{i \in V} \log(1 + e^{-s_i})$ subject to $s \in B(F)$. (3)

Thus, we have to optimize a convex function over B(F), a problem that has already been considered [8, 9]. For example, we can use the Frank-Wolfe algorithm [28, 29], which is easy to implement and has a convergence rate of O(1/k). It requires the optimization of linear functions $g(s) = \langle w, s \rangle = w^T s$ over the domain, which, as shown by Edmonds [1], can be done greedily in O(|V| log |V|) time. More precisely, to compute a maximizer $s^* \in B(F)$ of g(s), pick a bijection σ : {1, . . . , |V|} → V that orders w, i.e., $w_{\sigma(1)} \ge w_{\sigma(2)} \ge \cdots \ge w_{\sigma(|V|)}$. Then, set $s^*_{\sigma(i)} = F(\sigma(i) \mid \{\sigma(1), \ldots, \sigma(i-1)\})$. Alternatively, if we can efficiently minimize the sum of the function plus a modular term, e.g., for the family of graph-cut representable functions [10], we can apply the divide-and-conquer algorithm [9, §9.1], which requires solving O(|V|) such minimization problems.

1: procedure FRANK-WOLFE(F, x₁, ε)
2:   Define f(x) = log(1 + e^{−x})  ▷ Elementwise.
3:   for k ← 1, 2, . . . , T do
4:     Pick s ∈ argmin_{x ∈ B(F)} ⟨x, ∇f(x_k)⟩
5:     if ⟨x_k − s, ∇f(x_k)⟩ ≤ ε then
6:       return x_k  ▷ Small duality gap.
7:     else
8:       x_{k+1} = (1 − γ_k)x_k + γ_k s;  γ_k = 2/(k + 2)

1: procedure DIVIDE-CONQUER(F)
2:   s ← (F(V)/|V|)·1;  A* ← minimizer of F(·) − s(·)
3:   if F(A*) = s(A*) then
4:     return s
5:   else
6:     s_{A*} ← DIVIDE-CONQUER(F^{A*})
7:     s_{V−A*} ← DIVIDE-CONQUER(F_{A*})
8:     return (s_{A*}, s_{V−A*})

The entropy viewpoint and the Fenchel dual. Interestingly, (3) can be interpreted as a maximum-entropy problem.
Recall that for s ∈ B(F) we use the distribution $P(A) \propto \exp(-s(A))$, whose entropy is exactly the negative of our objective. Hence, we can view Problem (3) as maximizing the entropy over the set of factorized distributions with parameters in −B(F). We can go back to the standard representation using the marginals p via $p_i = 1/(1 + \exp(s_i))$. This becomes obvious if we consider the Fenchel dual of the problem, which, as discussed in §5, allows us to make connections with the classical mean-field approach. To this end, we introduce the Lovász extension, defined for any $F : 2^V \to \mathbb{R}$ as the support function over B(F), i.e., $f(p) = \sup_{s \in B(F)} s^T p$ [30]. Let us also denote, for $p \in [0, 1]^V$, by H[p] the Shannon entropy of a vector of |V| independent Bernoulli random variables with success probabilities p.

Lemma 3. The Fenchel dual problem of Problem (3) is

maximize over $p \in [0, 1]^V$:  H[p] − f(p). (4)

Moreover, there is zero duality gap, and the pair (s*, p*) is primal-dual optimal if and only if

$p^* = \left( \frac{1}{1 + \exp(s^*_1)}, \ldots, \frac{1}{1 + \exp(s^*_n)} \right)$ and $f(p^*) = p^{*T} s^*$. (5)

From the discussion above, it can easily be seen that the Fenchel dual reparameterizes the problem from the parameters −s to the marginals p. Note that the dual lets us provide a certificate of optimality, as the Lovász extension can be computed with Edmonds' greedy algorithm.

4.2 Optimizing over supergradients

To optimize over supergradients, we pick for each set X ⊆ V a representative supergradient and optimize over all X. As in [27], we consider the following supergradients, which are elements of $\partial^F(X)$:

          Grow supergradient ŝ_X       Shrink supergradient š_X        Bar supergradient s̄_X
i ∈ X     ŝ_X({i}) = F(i | V − {i})    š_X({i}) = F(i | X − {i})       s̄_X({i}) = F(i | V − {i})
i ∉ X     ŝ_X({i}) = F(i | X)          š_X({i}) = F({i})               s̄_X({i}) = F({i})

Optimizing the bound over bar supergradients requires the minimization of the original function plus a modular term. As already mentioned for the divide-and-conquer strategy above, we can do this efficiently for several problems.
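Edmonds' greedy maximizer over B(F), and the Lovász-extension evaluation it yields for the optimality certificate of Lemma 3, can be sketched as follows; the test function F(A) = √|A| is our own illustrative choice.

```python
from itertools import combinations
from math import sqrt

def edmonds_greedy(F, V, w):
    """Maximize <w, s> over the base polytope B(F): sort elements by
    decreasing weight and assign marginal gains (Edmonds' algorithm)."""
    order = sorted(V, key=lambda i: -w[i])
    s, prefix = {}, set()
    for i in order:
        s[i] = F(prefix | {i}) - F(prefix)  # F(i | prefix)
        prefix.add(i)
    return s

def lovasz(F, V, p):
    """Lovász extension f(p) = sup_{s in B(F)} <s, p>, attained at the
    greedy vector for the ordering induced by p."""
    s = edmonds_greedy(F, V, p)
    return sum(p[i] * s[i] for i in V)

V = [1, 2, 3]
F = lambda A: sqrt(len(A))  # illustrative submodular function

# The greedy output lies in B(F): s(V) = F(V) and s(A) <= F(A) for all A.
s = edmonds_greedy(F, V, {1: 0.2, 2: 0.9, 3: -0.4})
assert abs(sum(s.values()) - F(set(V))) < 1e-12
for r in range(len(V) + 1):
    for c in combinations(V, r):
        assert sum(s[i] for i in c) <= F(set(c)) + 1e-12

# The extension interpolates F: on indicator vectors it equals F.
assert abs(lovasz(F, V, {1: 1, 2: 1, 3: 0}) - F({1, 2})) < 1e-12
```

The sort dominates the cost, giving the O(|V| log |V|) time quoted above, plus |V| evaluations of F for the marginal gains.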
The exact formulation of the problem is presented below.

Lemma 4. Define the modular functions $m_1(\{i\}) = \log(1 + e^{-F(i \mid V - i)}) - \log(1 + e^{F(\{i\})})$ and $m_2(\{i\}) = \log(1 + e^{F(i \mid V - i)}) - \log(1 + e^{-F(\{i\})})$. The following pairs of problems are equivalent:

$\min_X \log Z^+_X(\bar{s}^X) \equiv \min_X F(X) + m_1(X)$
$\max_X \log Z^-_X(\bar{s}^X) \equiv \min_X F(X) - m_2(X)$

Even though we cannot optimize over grow and shrink supergradients, we can evaluate all three at the optima of the problems above and pick the one that gives the best bound.

5 Mean-field methods and the multi-linear extension

Is there a relation to traditional variational methods? If Q(·) is a distribution over subsets of V, then

$0 \le \mathrm{KL}(Q \,\|\, P) = \mathbb{E}_Q\left[\log \tfrac{Q(S)}{P(S)}\right] = \log Z + \mathbb{E}_Q\left[\log \tfrac{Q(S)}{\exp(-F(S))}\right] = \log Z - H[Q] + \mathbb{E}_Q[F],$

which yields the bound $\log Z \ge H[Q] - \mathbb{E}_Q[F]$. The mean-field method restricts Q to be a completely factorized distribution, so that elements are picked independently and Q can be described by the vector of marginals $q \in [0, 1]^V$, over which it is then optimized. Compare this with our approach:

Mean-field objective: maximize over $q \in [0, 1]^V$:  H[q] − E_q[F]   (non-concave, can be hard to evaluate)
Our objective, L-FIELD: maximize over $q \in [0, 1]^V$:  H[q] − f(q)   (concave, efficient to evaluate)

Both the Lovász extension f(q) and the multi-linear extension $\tilde{f}(q) = \mathbb{E}_q[F]$ are continuous extensions of F, introduced for submodular minimization [30] and maximization [31], respectively. The former agrees with the convex envelope of F and can be efficiently evaluated (in O(|V|) evaluations of F) using Edmonds' greedy algorithm (cf. §4.1, [1]). In contrast, evaluating $\tilde{f}(q) = \mathbb{E}_q[F] = \sum_{A \subseteq V} \prod_{i} q_i^{[i \in A]} (1 - q_i)^{[i \notin A]} F(A)$ in general requires summing over exponentially many terms, a problem potentially as hard as the original inference problem! Even if $\tilde{f}(q)$ is approximated by sampling, it is neither convex nor concave. Moreover, computing the coordinate-ascent updates of mean-field can be intractable for general F.
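To make the contrast concrete, here is the exponential-size sum defining the multi-linear extension, evaluated by brute force on a tiny ground set (the function F is our own illustrative choice). Note the 2^|V| terms, versus the O(|V|) function evaluations needed for the Lovász extension.

```python
from itertools import combinations
from math import sqrt

V = [1, 2, 3]
F = lambda A: sqrt(len(A))  # illustrative submodular function

def multilinear(F, V, q):
    """E_q[F]: expectation of F(A) when each i enters A independently
    with probability q[i] -- a sum over all 2^|V| subsets."""
    total = 0.0
    for r in range(len(V) + 1):
        for c in combinations(V, r):
            A = set(c)
            weight = 1.0
            for i in V:
                weight *= q[i] if i in A else 1 - q[i]
            total += weight * F(A)
    return total

# On integral (indicator) vectors the extension agrees with F itself.
q01 = {1: 1.0, 2: 0.0, 3: 1.0}
assert abs(multilinear(F, V, q01) - F({1, 3})) < 1e-12
```

At q = (1/2, ..., 1/2) the sum reduces to the plain average of F over all subsets, which makes the exponential blow-up for larger |V| evident.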
Hence, our approach can be motivated as follows: instead of using the multi-linear extension $\tilde{f}$, we use the Lovász extension f of F, which makes the problem convex and tractable. This analogy motivated the name L-FIELD (L for Lovász).

6 Curvature-dependent approximation bounds

How accurate are the bounds obtained via our variational approach? We now provide theoretical guarantees on the approximation quality as a function of the curvature of F, which quantifies how far the function is from modularity. Curvature is defined for polymatroid functions, which are normalized non-decreasing submodular functions; i.e., a submodular function $F : 2^V \to \mathbb{R}$ is polymatroid if for all A ⊆ B ⊆ V it holds that F(A) ≤ F(B).

Definition 2 (From [32]). Let $G : 2^V \to \mathbb{R}$ be a polymatroid function. The curvature κ of G is defined as⁵

$\kappa = 1 - \min_{i \in V : G(\{i\}) > 0} \frac{G(i \mid V - \{i\})}{G(\{i\})}.$

The curvature is always between 0 and 1, and is equal to 0 if and only if the function is modular. Although curvature is a notion for polymatroid functions, we can still show results for the general case, as any submodular function F can be decomposed [33] as the sum of a modular term m(·), defined by $m(\{i\}) = F(i \mid V - \{i\})$, and G = F − m, which is a polymatroid function. Our bounds below depend on the curvature of G and on $G_{\mathrm{MAX}} = G(V) = F(V) - \sum_{i \in V} F(i \mid V - i)$.

Theorem 1. Let F = G + m, where G is polymatroid with curvature κ and m is modular, defined as above. Pick any bijection σ : V → {1, 2, . . . , |V|} and define the sets $S^\sigma_0 = \emptyset$, $S^\sigma_i = \{\sigma(1), \ldots, \sigma(i)\}$. If we define s by $s_{\sigma(i)} = G(S^\sigma_i) - G(S^\sigma_{i-1})$, then $s + m \in \partial_F(\emptyset)$ and the following inequalities hold:

$\log Z^-(s + m, 0) - \log \sum_{A \subseteq V} \exp(-F(A)) \le \kappa G_{\mathrm{MAX}}$ (6)
$\log \sum_{A \subseteq V} \exp(+F(A)) - \log Z^+(s + m, 0) \le \kappa G_{\mathrm{MAX}}$ (7)

Theorem 2. Under the same assumptions as in Theorem 1, if we define the modular function s(·) by $s(A) = \sum_{i \in A} G(\{i\})$, then $s + m \in \partial^F(\emptyset)$ and the following inequalities hold:
$\log \sum_{A \subseteq V} \exp(-F(A)) - \log Z^-(s + m, 0) \le \frac{\kappa(n-1)}{1 + (n-1)(1-\kappa)} G_{\mathrm{MAX}} \le \frac{\kappa}{1-\kappa} G_{\mathrm{MAX}}$ (8)
$\log Z^+(s + m, 0) - \log \sum_{A \subseteq V} \exp(+F(A)) \le \frac{\kappa(n-1)}{1 + (n-1)(1-\kappa)} G_{\mathrm{MAX}} \le \frac{\kappa}{1-\kappa} G_{\mathrm{MAX}}$ (9)

Note that we establish these bounds for specific sub- and supergradients. Since our variational scheme considers these in the optimization as well, the same quality guarantees hold for the optimized bounds. Further, note that we get a dependence on the range of the function via $G_{\mathrm{MAX}}$. However, if we consider αF for large α > 1, most of the mass will be concentrated at the MAP (assuming it is unique). In this case, L-FIELD also performs well, as it can always choose gradients that are tight at the MAP. When we optimize over supergradients, all possible tight sets are considered. Similarly, the subgradients are optimized over B(F), and for any X ⊆ V there exists some $s_X \in B(F)$ tight at X.

7 Experiments

Our experiments⁶ aim to address four main questions: (1) How large is the gap between the upper and lower bounds on the log-partition function and the marginals? (2) How accurate are the factorized approximations obtained from a single MAP-like optimization problem? (3) How does the accuracy depend on the amount of evidence (i.e., the concentration of the posterior), the curvature of the function, and the type of Bayesian submodular model considered? (4) How does L-FIELD compare to mean-field on problems where the latter can be applied?

We consider approximate marginals obtained from the following methods: lower/upper, obtained from the factorized distributions associated with the modular lower/upper bounds; and lower-bound/upper-bound, the lower/upper end of the estimated probability interval. All of the functions we consider are graph-representable [17], which allows us to perform the optimization over superdifferentials using a single graph cut and to use the exact divide-and-conquer algorithm.

⁵We differ from the convention of removing i ∈ V s.t. G({i}) = 0. Please see the appendix for a discussion.

We used the min-cut
(Footnote 6: The code will be made available at http://las.ethz.ch.)

implementation from [34]. Since the update equations are easily computable, we have also implemented mean-field for the first experiment. For the other two experiments computing the updates requires exhaustive enumeration and is intractable. The results are shown in Figure 1 and the experiments are explained below. We plot the averages of several repetitions of the experiments. Note that computing intervals for marginals requires two MAP-like optimizations per variable; hence we focus on small problems with |V| = 100. We point out that obtaining a single factorized approximation (as produced, e.g., by mean-field) only requires a single MAP-like optimization, which can be done for more than 270,000 variables [19].

Log-supermodular: Cuts / Pairwise MRFs. Our first experiment evaluates L-FIELD on a sequence of distributions that are increasingly more concentrated. Motivated by applications in semi-supervised learning, we sampled data from a 2-dimensional Gaussian mixture model with 2 clusters. The centers were sampled from N([3, 3], I) and N([−3, −3], I), respectively. For each cluster, we sampled n = 50 points from a bivariate normal. These 2n points were then used as nodes to create a graph with the weight between points x and x′ equal to exp(−||x − x′||). As prior we chose P(A) ∝ exp(−F(A)), where F is the cut function in this graph; hence P(A) is a regular MRF. Then, for k = 1, . . . , n we consider the conditional distribution on the event that k points from the first cluster are on one side of the cut and k points from the other cluster are on the other side. As we provide more evidence, the posterior concentrates, and the intervals for both the log-partition function and marginals shrink. Compared with ground truth, the estimates of the marginal probabilities improve as well. Due to non-convexity, mean-field occasionally gets stuck in local optima, resulting in very poor marginals.
To prevent this, we chose the best run out of 20 random restarts. These best runs produced slightly better marginals than L-FIELD for this model, at the cost of less robustness.

Log-supermodular: Decomposable functions. Our second experiment assesses the performance as a function of the curvature of F. It is motivated by a problem in outbreak detection on networks. Assume that we have a graph G = (V, E) and that some of its nodes E ⊆ V have been infected by some contagious process. Instead of E, we observe a noisy set N ⊆ V, corrupted with a false positive rate of 0.1 and a false negative rate of 0.2. We used a log-supermodular prior P(A) ∝ exp(−Σ_{v∈V} (|N_v ∩ A| / |N_v|)^µ), where µ ∈ [0, 1] and N_v is the union of v and its neighbors. This prior prefers smaller sets and sets that are more clustered on the graph. Note that µ controls the preference for clustered nodes and affects the curvature. We sampled random graphs with 100 nodes from a Watts-Strogatz model and obtained E by running an independent cascade starting from 2 random nodes. Then, for varying µ, we consider the posterior, which is log-supermodular, as the noise model results in a modular likelihood. As the curvature increases, the intervals for both the log-partition function and marginals decrease as expected. Surprisingly, the marginals are very accurate (< 0.1 average error) even for very large curvature. This suggests that our curvature-dependent bounds are very conservative, and much better performance can be expected in practice.

Log-submodular: Facility location modeling. Our last experiment evaluates how accurate L-FIELD is when quantifying uncertainty in submodular maximization tasks. Concretely, we consider the problem of sensor placement in water distribution networks, which can be modeled as submodular maximization [35]. More specifically, we have a water distribution network and there are some junctions V where we can put sensors that can detect contaminated water.
We also have a set I of contamination scenarios. For each i ∈ I and j ∈ V we have a utility C_{i,j} ∈ [0, 1] that comes from real data [35]. Moreover, as the sensors are expensive, we would like to use as few as possible. We use the facility-location model, more precisely P(S = A) ∝ exp(F(A) − 2|A|), with F(A) = Σ_{i∈I} max_{j∈A} C_{i,j}. Instead of optimizing for a fixed placement, here we consider the problem of sampling from P in order to quantify the uncertainty in the optimization task. We used the following sampling strategy. We consider nodes v ∈ V in some order. We then sample a Bernoulli Z with probability P(Z = 1) = q_v based on the factorized distribution q from the modular upper bound. We then condition on v ∈ S if Z = 1, or v ∉ S if Z = 0. In the computation of the lower bound we used the subgradient s^g computed from the greedy order of V: the i-th element in this order v_1, . . . , v_n is the one that gives the highest improvement when added to the set formed by the previous i − 1 elements. Then, s^g ∈ ∂F(∅) with s^g_i = F(v_i | {v_1, . . . , v_{i−1}}). We repeated the experiment several times using randomly sampled 500 contamination scenarios and 100 locations from a larger dataset. Note that our approximations get better as we condition on more information (i.e., proceed through the iterations of the sampling procedure above). Also note that even from the very beginning, the marginals are very accurate (< 0.1 average error).
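The facility-location objective and the greedy subgradient s^g described above are both cheap to compute from value-oracle calls. Below is a minimal Python sketch on hypothetical random utilities; the matrix C, its dimensions, and the random seed are placeholders of our own, not the real water-network data of [35]:

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.uniform(0.0, 1.0, size=(20, 6))   # C[i, j]: utility of location j under scenario i

def F(A):
    # Facility-location value: each scenario is served by its best placed sensor.
    return float(sum(max(C[i, j] for j in A) for i in range(C.shape[0]))) if A else 0.0

# Greedy order of V and the resulting subgradient entries
# sg_i = F(v_i | {v_1, ..., v_{i-1}}), the marginal gain of the i-th picked element.
gains, chosen = [], set()
while len(chosen) < C.shape[1]:
    j = max((j for j in range(C.shape[1]) if j not in chosen),
            key=lambda j: F(chosen | {j}) - F(chosen))
    gains.append(F(chosen | {j}) - F(chosen))
    chosen.add(j)
```

By submodularity the greedy marginal gains are non-increasing, and they telescope to F(V), so the resulting modular function is tight at the full set.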
[Figure 1: Experiments on [CT] Cuts (a-c), [NW] network detection (d-f), [SP] sensor placement (g-i). Panels show log-partition function bounds (Lower/Upper/Mean-Field), the average gap between the upper and lower probability bounds, and the mean absolute error of marginals, plotted against the number of conditioned pairs ([CT]), 1-Curvature ([NW]), and the iteration ([SP]). Note that to generate (c,f,i) we had to compute the exact marginals by exhaustive enumeration. Hence, these three graphs were created using a smaller ground set of size 20. The error bars capture 3 standard errors.]

8 Conclusion

We proposed L-FIELD, the first variational method for approximate inference in general Bayesian submodular and supermodular models. Our approach has several attractive properties: It produces rigorous upper and lower bounds on the log-partition function and on marginal probabilities. These bounds can be optimized efficiently via convex and submodular optimization. Accurate factorial approximations can be obtained at the same computational cost as performing MAP inference in the underlying model, a problem for which a vast array of scalable methods are available. Furthermore, we identified a natural connection to the traditional mean-field method and bounded the quality of our approximations with the curvature of the function. Our experiments demonstrate the accuracy of our inference scheme on several natural examples of Bayesian submodular models. We believe that our results present a significant step in understanding the role of submodularity – so far mainly considered for optimization – in approximate Bayesian inference.
Furthermore, L-FIELD presents a significant advance in our ability to perform probabilistic inference in models with complex, high-order dependencies, which present a major challenge for classical techniques.

Acknowledgments. This research was supported in part by SNSF grant 200021 137528, ERC StG 307036 and a Microsoft Research Faculty Fellowship.

References
[1] J. Edmonds. “Submodular functions, matroids, and certain polyhedra”. In: Combinatorial structures and their applications (1970), pp. 69–87.
[2] D. Golovin and A. Krause. “Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization”. In: Journal of Artificial Intelligence Research (JAIR) 42 (2011), pp. 427–486.
[3] Y. Yue and C. Guestrin. “Linear Submodular Bandits and its Application to Diversified Retrieval”. In: Neural Information Processing Systems (NIPS). 2011.
[4] H. Lin and J. Bilmes. “A class of submodular functions for document summarization”. In: 49th Annual Meeting of the Association for Computational Linguistics: HLT. 2011, pp. 510–520.
[5] V. Cevher and A. Krause. “Greedy Dictionary Selection for Sparse Representation”. In: IEEE Journal of Selected Topics in Signal Processing 99.5 (2011), pp. 979–988.
[6] M. Narasimhan, N. Jojic, and J. Bilmes. “Q-clustering”. In: NIPS. Vol. 5. 10.10. 2005, p. 5.
[7] F. Bach. “Structured sparsity-inducing norms through submodular functions”. In: NIPS. 2010.
[8] S. Fujishige. Submodular functions and optimization. Vol. 58. Annals of Discrete Mathematics. 2005.
[9] F. Bach. “Learning with submodular functions: a convex optimization perspective”. In: Foundations and Trends in Machine Learning 6.2-3 (2013), pp. 145–373. ISSN: 1935-8237.
[10] S. Jegelka, H. Lin, and J. A. Bilmes. “On fast approximate submodular minimization”. In: NIPS. 2011.
[11] N. Buchbinder, M. Feldman, J. Naor, and R. Schwartz. “A tight linear time (1/2)-approximation for unconstrained submodular maximization”. In: Foundations of Computer Science (FOCS).
2012.
[12] Y. Boykov, O. Veksler, and R. Zabih. “Fast approximate energy minimization via graph cuts”. In: Pattern Analysis and Machine Intelligence, IEEE Transactions on 23.11 (2001), pp. 1222–1239.
[13] M. Jerrum and A. Sinclair. “Polynomial-time approximation algorithms for the Ising model”. In: SIAM Journal on Computing 22.5 (1993), pp. 1087–1116.
[14] L. A. Goldberg and M. Jerrum. “The complexity of ferromagnetic Ising with local fields”. In: Combinatorics, Probability and Computing 16.01 (2007), pp. 43–61.
[15] J. Gillenwater, A. Kulesza, and B. Taskar. “Near-Optimal MAP Inference for Determinantal Point Processes”. In: Proc. Neural Information Processing Systems (NIPS). 2012.
[16] M. J. Wainwright and M. I. Jordan. “Graphical Models, Exponential Families, and Variational Inference”. In: Found. Trends Mach. Learn. 1.1-2 (2008), pp. 1–305.
[17] V. Kolmogorov and R. Zabih. “What energy functions can be minimized via graph cuts?” In: Pattern Analysis and Machine Intelligence, IEEE Transactions on 26.2 (2004), pp. 147–159.
[18] P. Stobbe and A. Krause. “Efficient Minimization of Decomposable Submodular Functions”. In: Proc. Neural Information Processing Systems (NIPS). 2010.
[19] S. Jegelka, F. Bach, and S. Sra. “Reflection methods for user-friendly submodular optimization”. In: Advances in Neural Information Processing Systems. 2013, pp. 1313–1321.
[20] S. Jegelka and J. Bilmes. “Submodularity beyond submodular energies: coupling edges in graph cuts”. In: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. 2011, pp. 1897–1904.
[21] A. Krause and C. Guestrin. “Near-optimal Nonmyopic Value of Information in Graphical Models”. In: Conference on Uncertainty in Artificial Intelligence (UAI). 2005.
[22] A. Krause and D. Golovin. “Submodular Function Maximization”. In: Tractability: Practical Approaches to Hard Problems (to appear). Cambridge University Press, 2014.
[23] P. Kohli, L. Ladický, and P. H. Torr.
“Robust higher order potentials for enforcing label consistency”. In: International Journal of Computer Vision 82.3 (2009), pp. 302–324.
[24] A. Kulesza and B. Taskar. “Determinantal Point Processes for Machine Learning”. In: Foundations and Trends in Machine Learning 5.2–3 (2012).
[25] R. Gomes and A. Krause. “Budgeted Nonparametric Learning from Data Streams”. In: ICML. 2010.
[26] K. El-Arini, G. Veda, D. Shahaf, and C. Guestrin. “Turning down the noise in the blogosphere”. In: Proc. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2009.
[27] R. Iyer, S. Jegelka, and J. Bilmes. “Fast Semidifferential-based Submodular Function Optimization”. In: ICML (3). 2013, pp. 855–863.
[28] M. Frank and P. Wolfe. “An algorithm for quadratic programming”. In: Naval Research Logistics Quarterly 3.1-2 (1956), pp. 95–110. ISSN: 1931-9193.
[29] M. Jaggi. “Revisiting Frank-Wolfe: Projection-free sparse convex optimization”. In: 30th International Conference on Machine Learning (ICML-13). 2013, pp. 427–435.
[30] L. Lovász. “Submodular functions and convexity”. In: Mathematical Programming: The State of the Art. Springer, 1983, pp. 235–257.
[31] G. Calinescu, C. Chekuri, M. Pál, and J. Vondrák. “Maximizing a submodular set function subject to a matroid constraint”. In: Integer Programming and Combinatorial Optimization. Springer, 2007.
[32] M. Conforti and G. Cornuejols. “Submodular set functions, matroids and the greedy algorithm: tight worst-case bounds and some generalizations of the Rado-Edmonds theorem”. In: Discrete Applied Mathematics 7.3 (1984), pp. 251–274.
[33] W. H. Cunningham. “Decomposition of submodular functions”. In: Combinatorica 3.1 (1983).
[34] Y. Boykov and V. Kolmogorov. “An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision”. In: Pattern Analysis and Machine Intelligence, IEEE Trans. on 26.9 (2004).
[35] A. Krause, J. Leskovec, C. Guestrin, J. VanBriesen, and C. Faloutsos.
“Efficient Sensor Placement Optimization for Securing Large Water Distribution Networks”. In: Journal of Water Resources Planning and Management 134.6 (2008), pp. 516–526.
Algorithms for CVaR Optimization in MDPs Yinlam Chow∗ Institute of Computational & Mathematical Engineering, Stanford University Mohammad Ghavamzadeh† Adobe Research & INRIA Lille - Team SequeL Abstract In many sequential decision-making problems we may want to manage risk by minimizing some measure of variability in costs in addition to minimizing a standard criterion. Conditional value-at-risk (CVaR) is a relatively new risk measure that addresses some of the shortcomings of the well-known variance-related risk measures, and because of its computational efficiencies has gained popularity in finance and operations research. In this paper, we consider the mean-CVaR optimization problem in MDPs. We first derive a formula for computing the gradient of this risk-sensitive objective function. We then devise policy gradient and actor-critic algorithms that each uses a specific method to estimate this gradient and updates the policy parameters in the descent direction. We establish the convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we demonstrate the usefulness of our algorithms in an optimal stopping problem. 1 Introduction A standard optimization criterion for an infinite horizon Markov decision process (MDP) is the expected sum of (discounted) costs (i.e., finding a policy that minimizes the value function of the initial state of the system). However in many applications, we may prefer to minimize some measure of risk in addition to this standard optimization criterion. In such cases, we would like to use a criterion that incorporates a penalty for the variability (due to the stochastic nature of the system) induced by a given policy. In risk-sensitive MDPs [16], the objective is to minimize a risk-sensitive criterion such as the expected exponential utility [16], a variance-related measure [24, 14], or the percentile performance [15]. 
The issue of how to construct such criteria in a manner that is both conceptually meaningful and mathematically tractable is still an open question. Although most losses (returns) are not normally distributed, the typical Markowitz mean-variance optimization [18], which relies on the first two moments of the loss (return) distribution, has dominated risk management for over 50 years. Numerous alternatives to mean-variance optimization have emerged in the literature, but there is no clear leader amongst these alternative risk-sensitive objective functions. Value-at-risk (VaR) and conditional value-at-risk (CVaR) are two promising such alternatives that quantify the losses that might be encountered in the tail of the loss distribution, and thus have received high status in risk management. For (continuous) loss distributions, while VaR_α measures risk as the maximum loss that might be incurred w.r.t. a given confidence level α, CVaR_α measures it as the expected loss given that the loss is greater than or equal to VaR_α. Although VaR is a popular risk measure, CVaR's computational advantages over VaR have boosted the development of CVaR optimization techniques. We provide the exact definitions of these two risk measures and briefly discuss some of VaR's shortcomings in Section 2. CVaR minimization was first developed by Rockafellar and Uryasev [23] and its numerical effectiveness was demonstrated in portfolio optimization and option hedging problems. Their work was then extended to objective functions consisting of different combinations of the expected loss and the CVaR, such as the minimization of the expected loss subject to a constraint on CVaR.

(Footnote ∗: Part of the work was completed during Yinlam Chow's internship at Adobe Research.)
(Footnote †: Mohammad Ghavamzadeh is at Adobe Research, on leave of absence from INRIA Lille - Team SequeL.)

This is the objective function
that we study in this paper, although we believe that our proposed algorithms can be easily extended to several other CVaR-related objective functions. Boda and Filar [9] and Bäuerle and Ott [20, 3] extended the results of [23] to MDPs (sequential decision-making). While the former proposed to use dynamic programming (DP) to optimize CVaR, an approach that is limited to small problems, the latter showed that in both finite and infinite horizon MDPs, there exists a deterministic history-dependent optimal policy for CVaR optimization (see Section 3 for more details). Most of the work in risk-sensitive sequential decision-making has been in the context of MDPs (when the model is known) and much less work has been done within the reinforcement learning (RL) framework. In risk-sensitive RL, we can mention the work by Borkar [10, 11], who considered the expected exponential utility, and those by Tamar et al. [26] and Prashanth and Ghavamzadeh [17] on several variance-related risk measures. CVaR optimization in RL is a rather novel subject. Morimura et al. [19] estimate the return distribution while exploring using a CVaR-based risk-sensitive policy. Their algorithm does not scale to large problems. Petrik and Subramanian [22] propose a method based on stochastic dual DP to optimize CVaR in large-scale MDPs. However, their method is limited to linearly controllable problems. Borkar and Jain [12] consider a finite-horizon MDP with a CVaR constraint and sketch a stochastic approximation algorithm to solve it. Finally, Tamar et al. [27] have recently proposed a policy gradient algorithm for CVaR optimization. In this paper, we develop policy gradient (PG) and actor-critic (AC) algorithms for mean-CVaR optimization in MDPs. We first derive a formula for computing the gradient of this risk-sensitive objective function. We then propose several methods to estimate this gradient both incrementally and using system trajectories (update at each time-step vs.
update after observing one or more trajectories). We then use these gradient estimates to devise PG and AC algorithms that update the policy parameters in the descent direction. Using the ordinary differential equations (ODE) approach, we establish the asymptotic convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we demonstrate the usefulness of our algorithms in an optimal stopping problem. In comparison to [27], while they develop a PG algorithm for CVaR optimization in stochastic shortest path problems that only considers continuous loss distributions, uses a biased estimator for VaR, is not incremental, and has no comprehensive convergence proof, here we study mean-CVaR optimization, consider both discrete and continuous loss distributions, devise both PG and (several) AC algorithms (trajectory-based and incremental – plus AC helps in reducing the variance of PG algorithms), and establish convergence proofs for our algorithms.

2 Preliminaries

We consider problems in which the agent's interaction with the environment is modeled as an MDP. An MDP is a tuple M = (X, A, C, P, P0), where X = {1, . . . , n} and A = {1, . . . , m} are the state and action spaces; C(x, a) ∈ [−C_max, C_max] is the bounded cost random variable whose expectation is denoted by c(x, a) = E[C(x, a)]; P(·|x, a) is the transition probability distribution; and P0(·) is the initial state distribution. For simplicity, we assume that the system has a single initial state x^0, i.e., P0(x) = 1{x = x^0}. All the results of the paper can be easily extended to the case where the system has more than one initial state. We also need to specify the rule according to which the agent selects actions at each state. A stationary policy µ(·|x) is a probability distribution over actions, conditioned on the current state.
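As a concrete illustration of such a stationary stochastic policy, a tabular softmax parameterization is one common choice; the paper itself leaves µ(·|x; θ) abstract, so the parameterization below is our own hypothetical example:

```python
import numpy as np

def softmax_policy(theta, x):
    # mu(. | x; theta): softmax over per-(state, action) preferences.
    # theta has shape (n_states, n_actions); this tabular form is an
    # illustrative choice, not the paper's parameterization.
    prefs = theta[x] - theta[x].max()     # shift for numerical stability
    p = np.exp(prefs)
    return p / p.sum()

theta = np.zeros((3, 2))                  # all-zero preferences: uniform policy
p = softmax_policy(theta, 0)              # probabilities over the 2 actions
rng = np.random.default_rng(0)
a = rng.choice(2, p=p)                    # sample an action a ~ mu(. | x = 0)
```

Any differentiable parameterization with full support works for the likelihood-ratio gradients used later, since ∇_θ log µ(a|x; θ) must exist.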
In policy gradient and actor-critic methods, we define a class of parameterized stochastic policies µ(·|x; θ), x ∈ X, θ ∈ Θ ⊆ R^κ1, estimate the gradient of a performance measure w.r.t. the policy parameters θ from the observed system trajectories, and then improve the policy by adjusting its parameters in the direction of the gradient. Since in this setting a policy µ is represented by its κ1-dimensional parameter vector θ, policy-dependent functions can be written as a function of θ in place of µ. So, we use µ and θ interchangeably in the paper. We denote by d^µ_γ(x|x^0) = (1 − γ) Σ_{k=0}^∞ γ^k P(x_k = x | x_0 = x^0; µ) and π^µ_γ(x, a|x^0) = d^µ_γ(x|x^0) µ(a|x) the γ-discounted visiting distributions of state x and state-action pair (x, a) under policy µ, respectively.

Let Z be a bounded-mean random variable, i.e., E[|Z|] < ∞, with cumulative distribution function F(z) = P(Z ≤ z) (e.g., one may think of Z as the loss of an investment strategy µ). We define the value-at-risk at confidence level α ∈ (0, 1) as VaR_α(Z) = min{z | F(z) ≥ α}. Here the minimum is attained because F is non-decreasing and right-continuous in z. When F is continuous and strictly increasing, VaR_α(Z) is the unique z satisfying F(z) = α; otherwise, the VaR equation can have no solution or a whole range of solutions. Although VaR is a popular risk measure, it suffers from being unstable and difficult to work with numerically when Z is not normally distributed, which is often the case as loss distributions tend to exhibit fat tails or empirical discreteness. Moreover, VaR is not a coherent risk measure [1] and, more importantly, does not quantify the losses that might be suffered beyond its value at the α-tail of the distribution [23]. An alternative measure that addresses most of VaR's shortcomings is conditional value-at-risk, CVaR_α(Z), which is the mean of the α-tail distribution of Z. If there is no probability atom at VaR_α(Z), CVaR_α(Z) has a unique value that is defined as CVaR_α(Z) = E[Z | Z ≥ VaR_α(Z)].
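The two definitions above are easy to check on samples. A small empirical sketch (the quantile index convention below is our own choice of estimator):

```python
import numpy as np

def var_cvar(z, alpha):
    # Empirical VaR_alpha: the smallest sample z with empirical CDF >= alpha.
    # CVaR here is the tail mean E[Z | Z >= VaR_alpha(Z)], which matches the
    # definition in the text when there is no probability atom at VaR.
    z = np.sort(np.asarray(z, dtype=float))
    var = z[int(np.ceil(alpha * len(z))) - 1]
    return var, z[z >= var].mean()

losses = np.arange(1.0, 101.0)            # losses 1, 2, ..., 100
v, c = var_cvar(losses, alpha=0.95)       # v = 95.0, c = mean(95..100) = 97.5
```

Note how CVaR looks past VaR into the tail: here it averages the six worst losses rather than reporting only the 95th percentile.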
Rockafellar and Uryasev [23] showed that

CVaR_α(Z) = min_{ν∈R} H_α(Z, ν) ≜ min_{ν∈R} { ν + (1/(1 − α)) E[(Z − ν)^+] },  (1)

where (x)^+ = max(x, 0) represents the positive part of x. Note that as a function of ν, H_α(·, ν) is finite and convex (hence continuous).

3 CVaR Optimization in MDPs

For a policy µ, we define the loss of a state x (state-action pair (x, a)) as the sum of (discounted) costs encountered by the agent when it starts at state x (state-action pair (x, a)) and then follows policy µ, i.e., D^θ(x) = Σ_{k=0}^∞ γ^k C(x_k, a_k) | x_0 = x, µ and D^θ(x, a) = Σ_{k=0}^∞ γ^k C(x_k, a_k) | x_0 = x, a_0 = a, µ. The expected values of these two random variables are the value and action-value functions of policy µ, i.e., V^θ(x) = E[D^θ(x)] and Q^θ(x, a) = E[D^θ(x, a)]. The goal in the standard discounted formulation is to find an optimal policy θ* = argmin_θ V^θ(x^0).

For CVaR optimization in MDPs, we consider the following optimization problem: for a given confidence level α ∈ (0, 1) and loss tolerance β ∈ R,

min_θ V^θ(x^0)  subject to  CVaR_α(D^θ(x^0)) ≤ β.  (2)

By Theorem 16 in [23], the optimization problem (2) is equivalent to (H_α is defined by (1))

min_{θ,ν} V^θ(x^0)  subject to  H_α(D^θ(x^0), ν) ≤ β.  (3)

To solve (3), we employ the Lagrangian relaxation procedure [4] to convert it to the following unconstrained problem:

max_{λ≥0} min_{θ,ν} L(θ, ν, λ) ≜ V^θ(x^0) + λ ( H_α(D^θ(x^0), ν) − β ),  (4)

where λ is the Lagrange multiplier. The goal here is to find the saddle point of L(θ, ν, λ), i.e., a point (θ*, ν*, λ*) that satisfies L(θ, ν, λ*) ≥ L(θ*, ν*, λ*) ≥ L(θ*, ν*, λ), ∀θ, ν, ∀λ ≥ 0. This is achieved by descending in (θ, ν) and ascending in λ using the gradients of L(θ, ν, λ) w.r.t. θ, ν, and λ, i.e.,^1

∇_θ L(θ, ν, λ) = ∇_θ V^θ(x^0) + (λ/(1 − α)) ∇_θ E[(D^θ(x^0) − ν)^+],  (5)
∂_ν L(θ, ν, λ) = λ ( 1 + (1/(1 − α)) ∂_ν E[(D^θ(x^0) − ν)^+] ) ∋ λ ( 1 − (1/(1 − α)) P(D^θ(x^0) ≥ ν) ),  (6)
∇_λ L(θ, ν, λ) = ν + (1/(1 − α)) E[(D^θ(x^0) − ν)^+] − β.  (7)

We assume that there exists a policy µ(·|·; θ) such that CVaR_α(D^θ(x^0)) ≤ β (feasibility assumption).
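Eq. 1 can be sanity-checked numerically. The sketch below uses a uniform discrete loss on {1, ..., 100} and a plain grid search over ν (our simplification; any convex optimizer would do). Because this distribution has an atom at VaR, the minimum of H, which is the correct CVaR, differs slightly from the naive tail mean:

```python
import numpy as np

def H(z, nu, alpha):
    # H_alpha(Z, nu) = nu + E[(Z - nu)^+] / (1 - alpha), estimated on samples z (Eq. 1).
    return nu + np.maximum(z - nu, 0.0).mean() / (1.0 - alpha)

z = np.arange(1.0, 101.0)                 # uniform discrete loss on {1, ..., 100}
alpha = 0.95
grid = np.linspace(0.0, 100.0, 2001)      # candidate values of nu
vals = np.array([H(z, nu, alpha) for nu in grid])
cvar = vals.min()                         # CVaR_0.95 = 98.0 for this distribution
nu_star = grid[int(vals.argmin())]        # minimum is attained on the interval [95, 96]
```

The flat minimum over [95, 96] illustrates why, with atoms, the VaR equation can have "a whole range of solutions" as noted in Section 2, while the CVaR value itself is unique.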
As discussed in Section 1, Bäuerle and Ott [20, 3] showed that there exists a deterministic history-dependent optimal policy for CVaR optimization. The important point is that this policy does not depend on the complete history, but only on the current time step k, the current state of the system x_k, and the accumulated discounted cost Σ_{i=0}^k γ^i C(x_i, a_i). In the following, we present a policy gradient (PG) algorithm (Sec. 4) and several actor-critic (AC) algorithms (Sec. 5) to optimize (4). While the PG algorithm updates its parameters after observing several trajectories, the AC algorithms are incremental and update their parameters at each time-step.

(Footnote 1: The notation ∋ in (6) means that the right-most term is a member of the sub-gradient set ∂_ν L(θ, ν, λ).)

4 A Trajectory-based Policy Gradient Algorithm

In this section, we present a policy gradient algorithm to solve the optimization problem (4). The unit of observation in this algorithm is a system trajectory generated by following the current policy. At each iteration, the algorithm generates N trajectories by following the current policy, uses them to estimate the gradients in Eqs. 5-7, and then uses these estimates to update the parameters θ, ν, λ. Let ξ = {x_0, a_0, x_1, a_1, . . . , x_{T−1}, a_{T−1}, x_T} be a trajectory generated by following the policy θ, where x_0 = x^0 and x_T is usually a terminal state of the system. After x_k visits the terminal state, it enters a recurring sink state x_S at the next time step, incurring zero cost, i.e., C(x_S, a) = 0, ∀a ∈ A. The time index T is referred to as the stopping time of the MDP. Since the transition is stochastic, T is a non-deterministic quantity. Here we assume that the policy µ is proper, i.e., Σ_{k=0}^∞ P(x_k = x | x_0 = x^0, µ) < ∞ for every x ∉ {x_S}. This further means that with probability 1, the MDP exits the transient states and hits x_S (and stays in x_S) in finite time T. For simplicity, we assume that the agent incurs zero cost at the terminal state.
Analogous results for the general case with a non-zero terminal cost can be derived using identical arguments. The loss and probability of ξ are defined as D(ξ) = Σ_{k=0}^{T−1} γ^k c(x_k, a_k) and P_θ(ξ) = P0(x_0) Π_{k=0}^{T−1} µ(a_k|x_k; θ) P(x_{k+1}|x_k, a_k), respectively. It can be easily shown that ∇_θ log P_θ(ξ) = Σ_{k=0}^{T−1} ∇_θ log µ(a_k|x_k; θ). Algorithm 1 contains the pseudo-code of our proposed policy gradient algorithm. What appears inside the parentheses on the right-hand side of the update equations are the estimates of the gradients of L(θ, ν, λ) w.r.t. θ, ν, λ (estimates of Eqs. 5-7) (see Appendix A.2 of [13]). Γ_θ is an operator that projects a vector θ ∈ R^κ1 to the closest point in a compact and convex set Θ ⊂ R^κ1, and Γ_ν and Γ_λ are projection operators to [−C_max/(1−γ), C_max/(1−γ)] and [0, λ_max], respectively. These projection operators are necessary to ensure the convergence of the algorithm. The step-size schedules satisfy the standard conditions for stochastic approximation algorithms, and ensure that the VaR parameter ν update is on the fastest time-scale {ζ3(i)}, the policy parameter θ update is on the intermediate time-scale {ζ2(i)}, and the Lagrange multiplier λ update is on the slowest time-scale {ζ1(i)} (see Appendix A.1 of [13] for the conditions on the step-size schedules). This results in a three time-scale stochastic approximation algorithm. We prove that our policy gradient algorithm converges to a (local) saddle point of the risk-sensitive objective function L(θ, ν, λ) (see Appendix A.3 of [13]).

Algorithm 1 Trajectory-based Policy Gradient Algorithm for CVaR Optimization
Input: parameterized policy µ(·|·; θ), confidence level α, and loss tolerance β
Initialization: policy parameter θ = θ0, VaR parameter ν = ν0, and the Lagrangian parameter λ = λ0
for i = 0, 1, 2, . . . do
  for j = 1, 2, . . . do
    Generate N trajectories {ξ_{j,i}}_{j=1}^N by starting at x_0 = x^0 and following the current policy θ_i.
  end for

  ν Update: ν_{i+1} = Γ_ν[ ν_i − ζ3(i) ( λ_i − (λ_i / ((1 − α)N)) Σ_{j=1}^N 1{D(ξ_{j,i}) ≥ ν_i} ) ]

  θ Update: θ_{i+1} = Γ_θ[ θ_i − ζ2(i) ( (1/N) Σ_{j=1}^N ∇_θ log P_θ(ξ_{j,i})|_{θ=θ_i} D(ξ_{j,i}) + (λ_i / ((1 − α)N)) Σ_{j=1}^N ∇_θ log P_θ(ξ_{j,i})|_{θ=θ_i} (D(ξ_{j,i}) − ν_i) 1{D(ξ_{j,i}) ≥ ν_i} ) ]

  λ Update: λ_{i+1} = Γ_λ[ λ_i + ζ1(i) ( ν_i − β + (1 / ((1 − α)N)) Σ_{j=1}^N (D(ξ_{j,i}) − ν_i) 1{D(ξ_{j,i}) ≥ ν_i} ) ]

end for
return parameters ν, θ, λ

5 Incremental Actor-Critic Algorithms

As mentioned in Section 4, the unit of observation in our policy gradient algorithm (Algorithm 1) is a system trajectory. This may result in high variance for the gradient estimates, especially when the length of the trajectories is long. To address this issue, in this section we propose two actor-critic algorithms that use linear approximation for some quantities in the gradient estimates and update the parameters incrementally (after each state-action transition). We present two actor-critic algorithms for optimizing the risk-sensitive measure (4). These algorithms are based on the gradient estimates of Sections 5.1-5.3. While the first algorithm (SPSA-based) is fully incremental and updates all the parameters θ, ν, λ at each time-step, the second one updates θ at each time-step and updates ν and λ only at the end of each trajectory, and is thus given the name semi trajectory-based. Algorithm 2 contains the pseudo-code of these algorithms. The projection operators Γ_θ, Γ_ν, and Γ_λ are defined as in Section 4 and are necessary to ensure the convergence of the algorithms. The step-size schedules satisfy the standard conditions for stochastic approximation algorithms, and ensure that the critic update is on the fastest time-scale {ζ4(i)}, the policy and VaR parameter updates are on the intermediate time-scale, with the ν-update {ζ3(i)} being faster than the θ-update {ζ2(i)}, and finally the Lagrange multiplier update is on the slowest time-scale {ζ1(i)} (see Appendix B.1 of [13] for the conditions on these step-size schedules). This results in four time-scale stochastic approximation algorithms.
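To make the three time-scale updates of Algorithm 1 concrete, here is a hypothetical vectorized sketch of one iteration. The step sizes, projection intervals, and the trajectory losses and score functions fed in below are placeholders of our own, not the paper's experimental setup; in a real run, score[j] would be ∇_θ log P_θ(ξ_{j,i}) computed from the policy parameterization:

```python
import numpy as np

def pg_step(theta, nu, lam, D, score, alpha, beta, steps, bounds):
    # One iteration of Algorithm 1's projected updates, given N trajectory losses
    # D[j] = D(xi_j) and score[j] = grad_theta log P_theta(xi_j), shape (N, dim(theta)).
    z3, z2, z1 = steps                     # nu fastest, theta intermediate, lambda slowest
    ind = (D >= nu).astype(float)          # 1{D(xi_j) >= nu}
    excess = (D - nu) * ind                # (D(xi_j) - nu) 1{D(xi_j) >= nu}
    nu_new = np.clip(nu - z3 * (lam - lam * ind.mean() / (1 - alpha)), *bounds["nu"])
    g = (score * D[:, None]).mean(0) \
        + lam / (1 - alpha) * (score * excess[:, None]).mean(0)
    theta_new = np.clip(theta - z2 * g, *bounds["theta"])
    lam_new = np.clip(lam + z1 * (nu - beta + excess.mean() / (1 - alpha)),
                      *bounds["lam"])
    return theta_new, nu_new, lam_new

# Tiny worked example with N = 4 synthetic trajectories.
D = np.array([1.0, 2.0, 3.0, 4.0])
score = np.ones((4, 2))                    # placeholder score functions
bounds = {"theta": (-10.0, 10.0), "nu": (-10.0, 10.0), "lam": (0.0, 10.0)}
theta, nu, lam = pg_step(np.zeros(2), 2.5, 1.0, D, score, alpha=0.5, beta=2.0,
                         steps=(0.1, 0.1, 0.1), bounds=bounds)
```

With α = 0.5 and ν = 2.5, exactly half of the losses exceed ν, so the ν-gradient estimate vanishes and ν is left unchanged, while θ descends and λ ascends.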
We prove that these actor-critic algorithms converge to a (local) saddle point of the risk-sensitive objective function L(θ, ν, λ) (see Appendix B.4 of [13]).

5.1 Gradient w.r.t. the Policy Parameters θ

The gradient of our objective function w.r.t. the policy parameters θ in (5) may be rewritten as

∇_θ L(θ, ν, λ) = ∇_θ ( E[D^θ(x^0)] + (λ/(1 − α)) E[(D^θ(x^0) − ν)^+] ).  (8)

Given the original MDP M = (X, A, C, P, P0) and the parameter λ, we define the augmented MDP M̄ = (X̄, Ā, C̄, P̄, P̄0) as X̄ = X × R, Ā = A, P̄0(x, s) = P0(x) 1{s0 = s}, and

C̄(x, s, a) = λ(−s)^+/(1 − α) if x = x_T, and C(x, a) otherwise;
P̄(x′, s′ | x, s, a) = P(x′|x, a) if s′ = (s − C(x, a))/γ, and 0 otherwise,

where x_T is any terminal state of the original MDP M and s_T is the value of the s part of the state when a policy θ reaches a terminal state x_T after T steps, i.e., s_T = (1/γ^T)(ν − Σ_{k=0}^{T−1} γ^k C(x_k, a_k)). We define a class of parameterized stochastic policies µ(·|x, s; θ), (x, s) ∈ X̄, θ ∈ Θ ⊆ R^κ1 for this augmented MDP. Thus, the total (discounted) loss of this trajectory can be written as

Σ_{k=0}^{T−1} γ^k C(x_k, a_k) + γ^T C̄(x_T, s_T, a) = D^θ(x^0) + (λ/(1 − α)) (D^θ(x^0) − ν)^+.  (9)

From (9), it is clear that the quantity in the parenthesis of (8) is the value function of the policy θ at state (x^0, ν) in the augmented MDP M̄, i.e., V^θ(x^0, ν). Thus, it is easy to show that (the second equality in Eq. 10 is the result of the policy gradient theorem [21])

∇_θ L(θ, ν, λ) = ∇_θ V^θ(x^0, ν) = (1/(1 − γ)) Σ_{x,s,a} π^θ_γ(x, s, a | x^0, ν) ∇ log µ(a|x, s; θ) Q^θ(x, s, a),  (10)

where π^θ_γ is the discounted visiting distribution (defined in Section 2) and Q^θ is the action-value function of policy θ in the augmented MDP M̄. We can show that (1/(1−γ)) ∇ log µ(a_k|x_k, s_k; θ) · δ_k is an unbiased estimate of ∇_θ L(θ, ν, λ), where δ_k = C̄(x_k, s_k, a_k) + γ V̂(x_{k+1}, s_{k+1}) − V̂(x_k, s_k) is the temporal-difference (TD) error in M̄, and V̂ is an unbiased estimator of V^θ (see, e.g., [6, 7]).
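The bookkeeping behind Eq. 9 can be verified on a toy trajectory: unrolling the augmented MDP's costs reproduces D^θ(x^0) + (λ/(1 − α))(D^θ(x^0) − ν)^+. A minimal sketch, where the costs and constants are hypothetical values of our own:

```python
def augmented_step(x, s, cost, gamma, lam, alpha, terminal):
    # One transition of the augmented MDP: at the terminal state the cost is
    # lam * (-s)^+ / (1 - alpha); otherwise the original cost is paid and the
    # s part of the state is deterministically updated to (s - cost) / gamma.
    if x == terminal:
        return None, lam * max(-s, 0.0) / (1.0 - alpha)
    return (s - cost) / gamma, cost

gamma, lam, alpha, nu = 0.9, 2.0, 0.9, 1.0
costs = [1.0, 2.0, 0.5]                   # hypothetical trajectory of original costs
s, total, disc = nu, 0.0, 1.0             # s starts at nu, as in P0-bar
for c in costs:
    s, c_bar = augmented_step(0, s, c, gamma, lam, alpha, terminal=-1)
    total += disc * c_bar
    disc *= gamma
_, c_T = augmented_step(-1, s, 0.0, gamma, lam, alpha, terminal=-1)
total += disc * c_T                       # add the discounted terminal cost

D = sum(gamma ** k * c for k, c in enumerate(costs))
# Eq. 9: total equals D + lam / (1 - alpha) * max(D - nu, 0.0)
```

At the terminal step s holds (ν − D)/γ^T, so the terminal penalty γ^T · λ(−s)^+/(1 − α) collapses exactly to (λ/(1 − α))(D − ν)^+.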
In our actor-critic algorithms, the critic uses linear approximation for the value function, $V^\theta(x,s) \approx v^\top\phi(x,s) = \tilde{V}^{\theta,v}(x,s)$, where the feature vector $\phi(\cdot)$ belongs to the low-dimensional space $\mathbb{R}^{\kappa_2}$.

5.2 Gradient w.r.t. the Lagrangian Parameter λ

We may rewrite the gradient of our objective function w.r.t. the Lagrangian parameter λ in (7) as
$\nabla_\lambda L(\theta,\nu,\lambda) = \nu - \beta + \nabla_\lambda\Big(\mathbb{E}\big[D^\theta(x^0)\big] + \frac{\lambda}{1-\alpha}\mathbb{E}\big[\big(D^\theta(x^0)-\nu\big)^+\big]\Big) \stackrel{(a)}{=} \nu - \beta + \nabla_\lambda V^\theta(x^0,\nu).$ (11)
Similar to Section 5.1, (a) comes from the fact that the quantity in the parenthesis in (11) is $V^\theta(x^0,\nu)$, the value function of the policy θ at state $(x^0,\nu)$ in the augmented MDP $\bar{\mathcal{M}}$. Note that the dependence of $V^\theta(x^0,\nu)$ on λ comes from the definition of the cost function $\bar{C}$ in $\bar{\mathcal{M}}$. We now derive an expression for $\nabla_\lambda V^\theta(x^0,\nu)$, which in turn will give us an expression for $\nabla_\lambda L(\theta,\nu,\lambda)$.

Lemma 1 The gradient of $V^\theta(x^0,\nu)$ w.r.t. the Lagrangian parameter λ may be written as
$\nabla_\lambda V^\theta(x^0,\nu) = \frac{1}{1-\gamma}\sum_{x,s,a}\pi^\theta_\gamma(x,s,a|x^0,\nu)\,\frac{1}{1-\alpha}\mathbf{1}\{x = x_T\}(-s)^+.$ (12)
Proof. See Appendix B.2 of [13]. ■

From Lemma 1 and (11), it is easy to see that $\nu - \beta + \frac{1}{(1-\gamma)(1-\alpha)}\mathbf{1}\{x = x_T\}(-s)^+$ is an unbiased estimate of $\nabla_\lambda L(\theta,\nu,\lambda)$. An issue with this estimator is that its value is fixed to $\nu_k - \beta$ all along a system trajectory, and only changes at the end to $\nu_k - \beta + \frac{1}{(1-\gamma)(1-\alpha)}(-s_T)^+$. This may affect the incremental nature of our actor-critic algorithm. To address this issue, we propose a different approach to estimate the gradients w.r.t. θ and λ in Section 5.4 (of course this does not come for free). Another important issue is that the above estimator is unbiased only if the samples are generated from the distribution $\pi^\theta_\gamma(\cdot|x^0,\nu)$. If we just follow the policy, then we may use $\nu_k - \beta + \frac{\gamma^k}{1-\alpha}\mathbf{1}\{x_k = x_T\}(-s_k)^+$ as an estimate for $\nabla_\lambda L(\theta,\nu,\lambda)$.
Note that this is an issue for all discounted actor-critic algorithms whose (likelihood ratio based) estimate of the gradient is unbiased only if the samples are generated from $\pi^\theta_\gamma$, and not when we simply follow the policy. This might be a reason why there is, to the best of our knowledge, no convergence analysis for (likelihood ratio based) discounted actor-critic algorithms.2

5.3 Sub-Gradient w.r.t. the VaR Parameter ν

We may rewrite the sub-gradient of our objective function w.r.t. the VaR parameter ν (Eq. 6) as
$\partial_\nu L(\theta,\nu,\lambda) \ni \lambda\Big(1 - \frac{1}{1-\alpha}\,\mathbb{P}\Big(\sum_{k=0}^{\infty}\gamma^k C(x_k,a_k) \ge \nu \,\Big|\, x_0 = x^0;\theta\Big)\Big).$ (13)
From the definition of the augmented MDP $\bar{\mathcal{M}}$, the probability in (13) may be written as $\mathbb{P}(s_T \le 0 \mid x_0 = x^0, s_0 = \nu;\theta)$, where $s_T$ is the s part of the state in $\bar{\mathcal{M}}$ when we reach a terminal state, i.e., $x = x_T$ (see Section 5.1). Thus, we may rewrite (13) as
$\partial_\nu L(\theta,\nu,\lambda) \ni \lambda\Big(1 - \frac{1}{1-\alpha}\,\mathbb{P}\big(s_T \le 0 \,\big|\, x_0 = x^0, s_0 = \nu;\theta\big)\Big).$ (14)
From (14), it is easy to see that $\lambda - \lambda\mathbf{1}\{s_T \le 0\}/(1-\alpha)$ is an unbiased estimate of the sub-gradient of $L(\theta,\nu,\lambda)$ w.r.t. ν. An issue with this (unbiased) estimator is that it can only be applied at the end of a system trajectory (i.e., when we reach the terminal state $x_T$), and thus, using it prevents us from having a fully incremental algorithm. In fact, this is the estimator that we use in our semi trajectory-based actor-critic algorithm. One approach to estimate this sub-gradient incrementally is to use the simultaneous perturbation stochastic approximation (SPSA) method [8]. The idea of SPSA is to estimate the sub-gradient $g(\nu) \in \partial_\nu L(\theta,\nu,\lambda)$ using two values of L at $\nu^- = \nu - \Delta$ and $\nu^+ = \nu + \Delta$, where $\Delta > 0$ is a positive perturbation (see [8, 17] for the detailed description of ∆).3 In order to see how SPSA can help us estimate our sub-gradient incrementally, note that
$\partial_\nu L(\theta,\nu,\lambda) = \lambda + \partial_\nu\Big(\mathbb{E}\big[D^\theta(x^0)\big] + \frac{\lambda}{1-\alpha}\mathbb{E}\big[\big(D^\theta(x^0)-\nu\big)^+\big]\Big) \stackrel{(a)}{=} \lambda + \partial_\nu V^\theta(x^0,\nu).$
(15)
Similar to Section 5.1, (a) comes from the fact that the quantity in the parenthesis in (15) is $V^\theta(x^0,\nu)$, the value function of the policy θ at state $(x^0,\nu)$ in the augmented MDP $\bar{\mathcal{M}}$. Since the critic uses a linear approximation for the value function, i.e., $V^\theta(x,s) \approx v^\top\phi(x,s)$, in our actor-critic algorithms (see Section 5.1 and Algorithm 2), the SPSA estimate of the sub-gradient is of the form $g(\nu) \approx \lambda + v^\top\big(\phi(x^0,\nu^+) - \phi(x^0,\nu^-)\big)/2\Delta$.

5.4 An Alternative Approach to Compute the Gradients

In this section, we present an alternative way to compute the gradients, especially those w.r.t. θ and λ. This allows us to estimate the gradient w.r.t. λ in a (more) incremental fashion (compared to the method of Section 5.3), at the cost of using two different linear function approximators (instead of the one used in Algorithm 2). In this approach, we define the augmented MDP slightly differently from the one in Section 5.3. The only difference is in the definition of the cost function, which is defined here as (note that C(x, a) has been replaced by 0 and λ has been removed)
$\bar{C}(x,s,a) = \begin{cases}(-s)^+/(1-\alpha) & \text{if } x = x_T,\\ 0 & \text{otherwise,}\end{cases}$
where $x_T$ is any terminal state of the original MDP $\mathcal{M}$. It is easy to see that the term $\frac{1}{1-\alpha}\mathbb{E}\big[\big(D^\theta(x^0)-\nu\big)^+\big]$ appearing in the gradients of Eqs. 5-7 is the value function of the policy θ at state $(x^0,\nu)$ in this augmented MDP. As a result, we have

Gradient w.r.t. θ: It is easy to see that now this gradient (Eq. 5) is the gradient of the value function of the original MDP, $\nabla_\theta V^\theta(x^0)$, plus λ times the gradient of the value function of the augmented MDP, $\nabla_\theta V^\theta(x^0,\nu)$, both at the initial states of these MDPs (with abuse of notation, we use V for the value function of both MDPs). Thus, using linear approximators $u^\top f(x,s)$ and $v^\top\phi(x,s)$ for the value functions of the original and augmented MDPs, $\nabla_\theta L(\theta,\nu,\lambda)$ can be estimated as $\nabla_\theta\log\mu(a_k|x_k,s_k;\theta)\cdot(\epsilon_k + \lambda\delta_k)$, where $\epsilon_k$ and $\delta_k$ are the TD-errors of these MDPs.

Gradient w.r.t. λ: Similar to the case for θ, it is easy to see that this gradient (Eq. 7) is ν − β plus the value function of the augmented MDP, $V^\theta(x^0,\nu)$, and thus can be estimated incrementally as $\nabla_\lambda L(\theta,\nu,\lambda) \approx \nu - \beta + v^\top\phi(x,s)$.

Sub-Gradient w.r.t. ν: This sub-gradient (Eq. 6) is λ times one plus the gradient w.r.t. ν of the value function of the augmented MDP, $\partial_\nu V^\theta(x^0,\nu)$, and thus it can be estimated incrementally using SPSA as $\lambda\big(1 + v^\top\frac{\phi(x^0,\nu^+)-\phi(x^0,\nu^-)}{2\Delta}\big)$. Algorithm 3 in Appendix B.3 of [13] contains the pseudo-code of the resulting algorithm.

2 Note that the discounted actor-critic algorithm with convergence proof in [5] is based on SPSA.
3 SPSA-based gradient estimates were first proposed in [25] and have been widely used in various settings, especially those involving high-dimensional parameters. The SPSA estimate described above is two-sided. It can also be implemented single-sided, where we use the values of the function at ν and ν+. We refer the readers to [8] for more details on SPSA and to [17] for its application to learning in risk-sensitive MDPs.

Algorithm 2 Actor-Critic Algorithms for CVaR Optimization
Input: Parameterized policy µ(·|·; θ) and value function feature vector φ(·) (both over the augmented MDP $\bar{\mathcal{M}}$), confidence level α, and loss tolerance β
Initialization: policy parameters θ = θ0; VaR parameter ν = ν0; Lagrangian parameter λ = λ0; value function weight vector v = v0
// (1) SPSA-based Algorithm:
for k = 0, 1, 2, . . . do
  Draw action $a_k \sim \mu(\cdot|x_k,s_k;\theta_k)$; observe cost $\bar{C}(x_k,s_k,a_k)$ (with λ = λk); observe next state $(x_{k+1},s_{k+1}) \sim \bar{P}(\cdot|x_k,s_k,a_k)$;  // note that $s_{k+1} = \big(s_k - C(x_k,a_k)\big)/\gamma$
  TD Error: $\delta_k = \bar{C}(x_k,s_k,a_k) + \gamma\, v_k^\top\phi(x_{k+1},s_{k+1}) - v_k^\top\phi(x_k,s_k)$ (16)
  Critic Update: $v_{k+1} = v_k + \zeta_4(k)\,\delta_k\,\phi(x_k,s_k)$ (17)
  ν Update: $\nu_{k+1} = \Gamma_\nu\Big(\nu_k - \zeta_3(k)\Big(\lambda_k + \frac{v_k^\top\big(\phi(x^0,\nu_k+\Delta_k) - \phi(x^0,\nu_k-\Delta_k)\big)}{2\Delta_k}\Big)\Big)$
(18)
  θ Update: $\theta_{k+1} = \Gamma_\theta\Big(\theta_k - \frac{\zeta_2(k)}{1-\gamma}\,\nabla_\theta\log\mu(a_k|x_k,s_k;\theta)\cdot\delta_k\Big)$ (19)
  λ Update: $\lambda_{k+1} = \Gamma_\lambda\Big(\lambda_k + \zeta_1(k)\Big(\nu_k - \beta + \frac{1}{(1-\alpha)(1-\gamma)}\mathbf{1}\{x_k = x_T\}(-s_k)^+\Big)\Big)$ (20)
  if $x_k = x_T$ (reach a terminal state), then set $(x_{k+1}, s_{k+1}) = (x^0, \nu_{k+1})$
end for
// (2) Semi Trajectory-based Algorithm:
for k = 0, 1, 2, . . . do
  if $x_k \neq x_T$ then
    Draw action $a_k \sim \mu(\cdot|x_k,s_k;\theta_k)$, observe cost $\bar{C}(x_k,s_k,a_k)$ (with λ = λk), and next state $(x_{k+1},s_{k+1}) \sim \bar{P}(\cdot|x_k,s_k,a_k)$;
    Update $(\delta_k, v_k, \theta_k, \lambda_k)$ using Eqs. 16, 17, 19, and 20
  else
    Update $(\delta_k, v_k, \theta_k, \lambda_k)$ using Eqs. 16, 17, 19, and 20; update ν as
    ν Update: $\nu_{k+1} = \Gamma_\nu\Big(\nu_k - \zeta_3(k)\Big(\lambda_k - \frac{\lambda_k}{1-\alpha}\mathbf{1}\{s_T \le 0\}\Big)\Big)$ (21)
    Set $(x_{k+1}, s_{k+1}) = (x^0, \nu_{k+1})$
  end if
end for
return policy and value function parameters θ, ν, λ, v

6 Experimental Results

We consider an optimal stopping problem in which the state at each time step k ≤ T consists of the cost $c_k$ and time k, i.e., $x = (c_k, k)$, where T is the stopping time. The agent (buyer) should decide either to accept the present cost or to wait. If she accepts, or when k = T, the system reaches a terminal state and the cost $c_k$ is received; otherwise, she receives the cost $p_h$ and the new state is $(c_{k+1}, k+1)$, where $c_{k+1}$ is $f_u c_k$ w.p. p and $f_d c_k$ w.p. 1 − p ($f_u > 1$ and $f_d < 1$ are constants). Moreover, there is a discount factor γ ∈ (0, 1) to account for the increase in the buyer's affordability. The problem is described in more detail in Appendix C of [13]. Note that if we change cost to reward and minimization to maximization, this is exactly the American option pricing problem, a standard testbed to evaluate risk-sensitive algorithms (e.g., [26]). Since the state space is continuous, finding an exact solution via DP is infeasible, and thus it requires approximation and sampling techniques.
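The optimal stopping dynamics are simple to simulate; a sketch of one rollout under a given stopping policy (function and parameter names here are illustrative, not from the paper):

```python
import random

def rollout_cost(policy, c0, p, fu, fd, ph, gamma, T, rng):
    # One trajectory of the optimal stopping problem: at step k the buyer
    # either accepts the current cost c_k (terminal), or pays p_h and the
    # cost moves to fu*c_k w.p. p and fd*c_k w.p. 1-p.  At k = T she must
    # accept.  Returns the discounted cumulative cost of the trajectory.
    c, total = c0, 0.0
    for k in range(T):
        if policy(c, k):                  # accept: receive c_k and stop
            return total + gamma ** k * c
        total += gamma ** k * ph          # wait: pay the cost p_h
        c = fu * c if rng.random() < p else fd * c
    return total + gamma ** T * c         # forced terminal state at k = T
```

Averaging many such rollouts gives Monte Carlo estimates of E[Dθ(x0)] and of its tail statistics (VaR/CVaR) for a fixed policy.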
We compare the performance of our risk-sensitive policy gradient Algorithm 1 (PG-CVaR) and two actor-critic Algorithms 2 (AC-CVaR-SPSA, AC-CVaR-Semi-Traj) with their risk-neutral counterparts (PG and AC) (see Appendix C of [13] for the details of these experiments). Figure 1 shows the distribution of the discounted cumulative cost Dθ(x0) for the policy θ learned by each of these algorithms. The results indicate that the risk-sensitive algorithms yield a higher expected loss, but less variance, compared to the risk-neutral methods. More precisely, the loss distributions of the risk-sensitive algorithms have a lower right tail than their risk-neutral counterparts. Table 1 summarizes the performance of these algorithms. The numbers reiterate what we concluded from Figure 1.

Figure 1: Loss distributions for the policies learned by the risk-sensitive and risk-neutral policy gradient and actor-critic algorithms. The two left panels correspond to the PG methods, and the two right panels correspond to the AC algorithms. In all cases, the loss tolerance equals β = 40.

                     E(Dθ(x0))   σ(Dθ(x0))   CVaR(Dθ(x0))
PG                   16.08       17.53       69.18
PG-CVaR              19.75       7.06        25.75
AC                   16.96       32.09       122.61
AC-CVaR-SPSA         22.86       3.40        31.36
AC-CVaR-Semi-Traj.   23.01       4.98        34.81

Table 1: Performance comparison for the policies learned by the risk-sensitive and risk-neutral algorithms.

7 Conclusions and Future Work

We proposed novel policy gradient and actor-critic (AC) algorithms for CVaR optimization in MDPs. We provided proofs of convergence (in [13]) to locally risk-sensitive optimal policies for the proposed algorithms. Further, using an optimal stopping problem, we observed that our algorithms resulted in policies whose loss distributions have a lower right tail compared to their risk-neutral counterparts.
This is extremely important for a risk-averse decision-maker, especially if the right tail contains catastrophic losses. Future work includes: 1) providing convergence proofs for our AC algorithms when the samples are generated by following the policy and not from its discounted visiting distribution, 2) using importance sampling methods [2, 27] to improve gradient estimates in the right tail of the loss distribution (worst-case events that are observed with low probability) of the CVaR objective function, and 3) evaluating our algorithms in more challenging problems.

Acknowledgement
The authors would like to thank Professor Marco Pavone and Lucas Janson for their comments that helped us with some technical details in the proofs of the algorithms.

References
[1] P. Artzner, F. Delbaen, J. Eber, and D. Heath. Coherent measures of risk. Journal of Mathematical Finance, 9(3):203–228, 1999.
[2] O. Bardou, N. Frikha, and G. Pagès. Computing VaR and CVaR using stochastic approximation and adaptive unconstrained importance sampling. Monte Carlo Methods and Applications, 15(3):173–210, 2009.
[3] N. Bäuerle and J. Ott. Markov decision processes with average-value-at-risk criteria. Mathematical Methods of Operations Research, 74(3):361–379, 2011.
[4] D. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[5] S. Bhatnagar. An actor-critic algorithm with function approximation for discounted cost constrained Markov decision processes. Systems & Control Letters, 59(12):760–766, 2010.
[6] S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. Incremental natural actor-critic algorithms. In Proceedings of Advances in Neural Information Processing Systems 20, pages 105–112, 2008.
[7] S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor-critic algorithms. Automatica, 45(11):2471–2482, 2009.
[8] S. Bhatnagar, H. Prasad, and L.A. Prashanth. Stochastic Recursive Algorithms for Optimization, volume 434. Springer, 2013.
[9] K. Boda and J. Filar.
Time consistent dynamic risk measures. Mathematical Methods of Operations Research, 63(1):169–186, 2006.
[10] V. Borkar. A sensitivity formula for the risk-sensitive cost and the actor-critic algorithm. Systems & Control Letters, 44:339–346, 2001.
[11] V. Borkar. Q-learning for risk-sensitive control. Mathematics of Operations Research, 27:294–311, 2002.
[12] V. Borkar and R. Jain. Risk-constrained Markov decision processes. IEEE Transactions on Automatic Control, 2014.
[13] Y. Chow, M. Ghavamzadeh, L. Janson, and M. Pavone. Algorithms for CVaR optimization in MDPs. arXiv:1406.3339, 2014.
[14] J. Filar, L. Kallenberg, and H. Lee. Variance-penalized Markov decision processes. Mathematics of Operations Research, 14(1):147–161, 1989.
[15] J. Filar, D. Krass, and K. Ross. Percentile performance criteria for limiting average Markov decision processes. IEEE Transactions on Automatic Control, 40(1):2–10, 1995.
[16] R. Howard and J. Matheson. Risk sensitive Markov decision processes. Management Science, 18(7):356–369, 1972.
[17] Prashanth L.A. and M. Ghavamzadeh. Actor-critic algorithms for risk-sensitive MDPs. In Proceedings of Advances in Neural Information Processing Systems 26, pages 252–260, 2013.
[18] H. Markowitz. Portfolio Selection: Efficient Diversification of Investment. John Wiley and Sons, 1959.
[19] T. Morimura, M. Sugiyama, M. Kashima, H. Hachiya, and T. Tanaka. Nonparametric return distribution approximation for reinforcement learning. In Proceedings of the 27th International Conference on Machine Learning, pages 799–806, 2010.
[20] J. Ott. A Markov Decision Model for a Surveillance Application and Risk-Sensitive Markov Decision Processes. PhD thesis, Karlsruhe Institute of Technology, 2010.
[21] J. Peters, S. Vijayakumar, and S. Schaal. Natural actor-critic. In Proceedings of the Sixteenth European Conference on Machine Learning, pages 280–291, 2005.
[22] M. Petrik and D. Subramanian.
An approximate solution method for large risk-averse Markov decision processes. In Proceedings of the 28th International Conference on Uncertainty in Artificial Intelligence, 2012.
[23] R. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk. Journal of Risk, 26:1443–1471, 2002.
[24] M. Sobel. The variance of discounted Markov decision processes. Applied Probability, pages 794–802, 1982.
[25] J. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control, 37(3):332–341, 1992.
[26] A. Tamar, D. Di Castro, and S. Mannor. Policy gradients with variance related risk criteria. In Proceedings of the Twenty-Ninth International Conference on Machine Learning, pages 387–396, 2012.
[27] A. Tamar, Y. Glassner, and S. Mannor. Policy gradients beyond expectations: Conditional value-at-risk. arXiv:1404.3862v1, 2014.
Structure Regularization for Structured Prediction

Xu Sun∗†
∗MOE Key Laboratory of Computational Linguistics, Peking University
†School of Electronics Engineering and Computer Science, Peking University
xusun@pku.edu.cn

Abstract

While there are many studies on weight regularization, the study of structure regularization is rare. Many existing systems for structured prediction focus on increasing the level of structural dependencies within the model. However, this trend could have been misdirected, because our study suggests that complex structures are actually harmful to generalization ability in structured prediction. To control structure-based overfitting, we propose a structure regularization framework via structure decomposition, which decomposes training samples into mini-samples with simpler structures, deriving a model with better generalization power. We show both theoretically and empirically that structure regularization can effectively control overfitting risk and lead to better accuracy. As a by-product, the proposed method can also substantially accelerate the training speed. The method and the theoretical results apply to general graphical models with arbitrary structures. Experiments on well-known tasks demonstrate that our method can easily beat the benchmark systems on those highly competitive tasks, achieving record-breaking accuracies yet with substantially faster training speed.

1 Introduction

Structured prediction models are popularly used to solve structure-dependent problems in a wide variety of application domains, including natural language processing, bioinformatics, speech recognition, and computer vision. Recently, many existing systems for structured prediction focus on increasing the level of structural dependencies within the model. We argue that this trend could have been misdirected, because our study suggests that complex structures are actually harmful to model accuracy.
While it is obvious that intensive structural dependencies can effectively incorporate structural information, it is less obvious that intensive structural dependencies have the drawback of increasing the generalization risk, because more complex structures are more prone to overfitting. Since this type of overfitting is caused by structure complexity, it can hardly be solved by ordinary regularization methods such as the L2 and L1 regularization schemes, which only control weight complexity. To deal with this problem, we propose a simple structure regularization solution based on tag structure decomposition. The proposed method decomposes each training sample into multiple mini-samples with simpler structures, deriving a model with better generalization power. The proposed method is easy to implement, and it has several interesting properties: (1) We show both theoretically and empirically that the proposed method can effectively reduce the overfitting risk in structured prediction. (2) The proposed method does not change the convexity of the objective function, so that a convex function penalized with a structure regularizer is still convex. (3) The proposed method has no conflict with weight regularization. Thus we can apply structure regularization together with weight regularization. (4) The proposed method can accelerate the convergence rate in training.

The term structural regularization has been used in prior work for regularizing structures of features, including spectral regularization [1], regularizing feature structures for classifiers [20], and many recent studies on structured sparsity in structured prediction scenarios [11, 8], via adopting mixed-norm regularization [10], Group Lasso [22], and posterior regularization [5]. Compared with that prior work, we emphasize that our proposal on tag structure regularization is novel.
This is because the term structure in all of the aforementioned work refers to structures of the feature space, which is substantially different from our proposal on regularizing tag structures (interactions among tags). Also, there are some other related studies. [17] described an interesting heuristic piecewise training method. [19] described a "lookahead" learning method. Our work differs from [17] and [19] mainly because our work is built on a regularization framework, with arguments and theoretical justifications on reducing generalization risk and improving convergence rate. Also, our method and the theoretical results can fit general graphical models with arbitrary structures, and the detailed algorithm is very different. On generalization risk analysis, related studies include [2, 12] on non-structured classification and [18, 7] on structured classification. To the best of our knowledge, this is the first theoretical result quantifying the relation between structure complexity and the generalization risk in structured prediction, and this is also the first proposal on structure regularization via regularizing tag interactions. The contributions of this work1 are two-fold:

• On the methodology side, we propose a structure regularization framework for structured prediction. We show both theoretically and empirically that the proposed method can effectively reduce the overfitting risk, and at the same time accelerate the convergence rate in training. Our method and the theoretical analysis do not make assumptions based on specific structures. In other words, the method and the theoretical results apply to graphical models with arbitrary structures, including linear chains, trees, and general graphs.
• On the application side, for several important natural language processing tasks, our simple method can easily beat the benchmark systems on those highly competitive tasks, achieving record-breaking accuracies as well as substantially faster training speed.

2 Structure Regularization

A graph of observations (even with arbitrary structures) can be indexed and denoted by an indexed sequence of observations $O = \{o_1, \dots, o_n\}$. We use the term sample to denote $O = \{o_1, \dots, o_n\}$. For example, in natural language processing, a sample may correspond to a sentence of n words with dependencies of tree structures (e.g., in syntactic parsing). For simplicity in analysis, we assume all samples have n observations (thus n tags). In a typical setting of structured prediction, all the n tags have inter-dependencies via Markov dependencies between neighboring tags. Thus, we call n the tag structure complexity, or simply structure complexity, below.

A sample is converted to an indexed sequence of feature vectors $x = \{x^{(1)}, \dots, x^{(n)}\}$, where $x^{(k)} \in \mathcal{X}$ is of dimension d and corresponds to the local features extracted from position/index k. We can use an $n \times d$ matrix to represent $x \in \mathcal{X}^n$. Let $\mathcal{Z} = (\mathcal{X}^n, \mathcal{Y}^n)$ and let $z = (x, y) \in \mathcal{Z}$ denote a sample in the training data. Suppose a training set is $S = \{z_1 = (x_1, y_1), \dots, z_m = (x_m, y_m)\}$, with size m, and the samples are drawn i.i.d. from a distribution D which is unknown. A learning algorithm is a function $G : \mathcal{Z}^m \mapsto \mathcal{F}$ with the function space $\mathcal{F} \subset \{\mathcal{X}^n \mapsto \mathcal{Y}^n\}$, i.e., G maps a training set S to a function $G_S : \mathcal{X}^n \mapsto \mathcal{Y}^n$. We suppose G is symmetric with respect to S, so that G is independent of the order of S.

Structural dependencies among tags are the major difference between structured prediction and non-structured classification. For the latter case, a local classification of g based on a position k can be expressed as $g(x^{(k-a)}, \dots, x^{(k+a)})$, where the term $\{x^{(k-a)}, \dots, x^{(k+a)}\}$ represents a local window. However, for structured prediction, a local classification on a position depends on the whole input $x = \{x^{(1)}, \dots, x^{(n)}\}$ rather than a local window, due to the nature of structural dependencies among tags (e.g., graphical models like CRFs). Thus, in structured prediction a local classification on k should be denoted as $g(x^{(1)}, \dots, x^{(n)}, k)$. To simplify the notation, we define $g(x, k) \triangleq g(x^{(1)}, \dots, x^{(n)}, k)$.

1See the code at http://klcl.pku.edu.cn/member/sunxu/code.htm

Figure 1: An illustration of structure regularization in the simple linear-chain case, which decomposes a training sample z with structure complexity 6 into three mini-samples with structure complexity 2. Structure regularization can apply to more general graphs with arbitrary dependencies.

We define the point-wise cost function $c : \mathcal{Y} \times \mathcal{Y} \mapsto \mathbb{R}^+$ as $c[G_S(x, k), y^{(k)}]$, which measures the cost on a position k by comparing $G_S(x, k)$ and the gold-standard tag $y^{(k)}$, and we introduce the point-wise loss as
$\ell(G_S, z, k) \triangleq c[G_S(x, k), y^{(k)}]$
Then, we define the sample-wise cost function $C : \mathcal{Y}^n \times \mathcal{Y}^n \mapsto \mathbb{R}^+$, which is the cost function with respect to a whole sample, and we introduce the sample-wise loss as
$L(G_S, z) \triangleq C[G_S(x), y] = \sum_{k=1}^{n} \ell(G_S, z, k) = \sum_{k=1}^{n} c[G_S(x, k), y^{(k)}]$
Given G and a training set S, what we are most interested in is the generalization risk in structured prediction (i.e., expected average loss) [18, 7]:
$R(G_S) = \mathbb{E}_z\Big[\frac{L(G_S, z)}{n}\Big]$
Since the distribution D is unknown, we have to estimate $R(G_S)$ by using the empirical risk:
$R_e(G_S) = \frac{1}{mn}\sum_{i=1}^{m} L(G_S, z_i) = \frac{1}{mn}\sum_{i=1}^{m}\sum_{k=1}^{n} \ell(G_S, z_i, k)$
To state our theoretical results, we must describe several quantities and assumptions following prior work [2, 12].
We assume a simple real-valued structured prediction scheme such that the class predicted on position k of x is the sign of $G_S(x, k) \in \mathcal{D}$.2 Also, we assume the point-wise cost function $c_\tau$ is convex and τ-smooth such that $\forall y_1, y_2 \in \mathcal{D}$, $\forall y^* \in \mathcal{Y}$,
$|c_\tau(y_1, y^*) - c_\tau(y_2, y^*)| \le \tau |y_1 - y_2|$ (1)
Also, we use a value ρ to quantify the bound of $|G_S(x, k) - G_{S^{\setminus i}}(x, k)|$ while changing a single sample (with size $n' \le n$) in the training set with respect to the structured input x. This ρ-admissible assumption can be formulated as
$\forall k,\quad |G_S(x, k) - G_{S^{\setminus i}}(x, k)| \le \rho\, \|G_S - G_{S^{\setminus i}}\|_2 \cdot \|x\|_2$ (2)
where $\rho \in \mathbb{R}^+$ is a value related to the design of algorithm G.

2.1 Structure Regularization

Most existing regularization techniques are for regularizing model weights/parameters (e.g., a representative regularizer is the Gaussian regularizer, or so-called L2 regularizer), and we call such regularization techniques weight regularization.

Definition 1 (Weight regularization) Let $N_\lambda : \mathcal{F} \mapsto \mathbb{R}^+$ be a weight regularization function on $\mathcal{F}$ with regularization strength λ. The structured classification based objective function with general weight regularization is as follows:
$R_\lambda(G_S) \triangleq R_e(G_S) + N_\lambda(G_S)$ (3)

2In practice, many popular structured prediction models have a convex and real-valued cost function (e.g., CRFs).

Algorithm 1 Training with structure regularization
1: Input: model weights w, training set S, structure regularization strength α
2: repeat
3:   S′ ← ∅
4:   for i = 1 → m do
5:     Randomly decompose $z_i \in S$ into mini-samples $N_\alpha(z_i) = \{z_{(i,1)}, \dots, z_{(i,\alpha)}\}$
6:     S′ ← S′ ∪ $N_\alpha(z_i)$
7:   end for
8:   for i = 1 → |S′| do
9:     Sample z′ uniformly at random from S′, with gradient $\nabla g_{z'}(w)$
10:    $w \leftarrow w - \eta \nabla g_{z'}(w)$
11:  end for
12: until convergence
13: return w

While weight regularization normalizes model weights, the proposed structure regularization method normalizes the structural complexity of the training samples.
As illustrated in Figure 1, our proposal is based on tag structure decomposition, which can be formally defined as follows:

Definition 2 (Structure regularization) Let $N_\alpha : \mathcal{F} \mapsto \mathcal{F}$ be a structure regularization function on $\mathcal{F}$ with regularization strength α, with $1 \le \alpha \le n$. The structured classification based objective function with structure regularization is as follows3:
$R_\alpha(G_S) \triangleq R_e[G_{N_\alpha(S)}] = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{\alpha} L[G_{S'}, z_{(i,j)}] = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{\alpha}\sum_{k=1}^{n/\alpha} \ell[G_{S'}, z_{(i,j)}, k]$ (4)
where $N_\alpha(z_i)$ randomly splits $z_i$ into α mini-samples $\{z_{(i,1)}, \dots, z_{(i,\alpha)}\}$, so that the mini-samples have a distribution on their sizes (structure complexities) with expected value $n' = n/\alpha$. Thus, we get
$S' = \{\underbrace{z_{(1,1)}, z_{(1,2)}, \dots, z_{(1,\alpha)}}_{\alpha}, \dots, \underbrace{z_{(m,1)}, z_{(m,2)}, \dots, z_{(m,\alpha)}}_{\alpha}\}$ (5)
with mα mini-samples of expected structure complexity n/α. We can denote S′ more compactly as $S' = \{z'_1, z'_2, \dots, z'_{m\alpha}\}$, and $R_\alpha(G_S)$ can be simplified as
$R_\alpha(G_S) \triangleq \frac{1}{mn}\sum_{i=1}^{m\alpha} L(G_{S'}, z'_i) = \frac{1}{mn}\sum_{i=1}^{m\alpha}\sum_{k=1}^{n/\alpha} \ell[G_{S'}, z'_i, k]$ (6)
When the structure regularization strength α = 1, we have S′ = S and $R_\alpha = R_e$. The structure regularization algorithm (in the stochastic gradient descent setting) is summarized in Algorithm 1. Recall that $x = \{x^{(1)}, \dots, x^{(n)}\}$ represents feature vectors. Thus, it should be emphasized that the decomposition of x is the decomposition of the feature vectors, not of the original observations. Actually, the decomposition of the feature vectors is more convenient and has no information loss — decomposing observations requires regenerating features and may lose some features. Structure regularization has no conflict with weight regularization, and the two can be applied together.
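A minimal sketch of the decomposition step $N_\alpha$ of Algorithm 1 for a linear-chain sample, splitting at random contiguous cut points so that mini-sample sizes average n/α (the contiguous split mirrors Figure 1; the function name is ours, for illustration):

```python
import random

def decompose(sample, alpha, rng):
    # Randomly split one sample (a sequence of per-position items, e.g.
    # (feature-vector, tag) pairs) into alpha contiguous mini-samples.
    # Contiguous cuts keep neighboring-tag dependencies inside each
    # mini-sample; the alpha-1 cut points are drawn without replacement.
    n = len(sample)
    cuts = sorted(rng.sample(range(1, n), alpha - 1))
    bounds = [0] + cuts + [n]
    return [sample[bounds[j]:bounds[j + 1]] for j in range(alpha)]
```

With α = 1 the sample is returned intact (so S′ = S, recovering the unregularized objective); Algorithm 1 then runs plain SGD over the pooled mini-samples.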
Definition 3 (Structure & weight regularization) By combining structure regularization in Definition 2 and weight regularization in Definition 1, the structured classification based objective function is as follows:
$R_{\alpha,\lambda}(G_S) \triangleq R_\alpha(G_S) + N_\lambda(G_S)$ (7)
When α = 1, we have $R_{\alpha,\lambda} = R_e(G_S) + N_\lambda(G_S) = R_\lambda$. Like existing weight regularization methods, our structure regularization is currently only for the training stage; we do not use structure regularization in the test stage.

3The notation N is overloaded here. For clarity throughout, N with subscript λ refers to the weight regularization function, and N with subscript α refers to the structure regularization function.

2.2 Reduction of Generalization Risk

In contrast to the simplicity of the algorithm, the theoretical analysis is quite technical. In this paper we only describe the major theoretical result. Detailed analysis and proofs are given in the full version of this work [14].

Theorem 4 (Generalization vs. structure regularization) Let the structured prediction objective function of G be penalized by structure regularization with factor $\alpha \in [1, n]$ and L2 weight regularization with factor λ, and let the penalized function have a minimizer f:
$f = \operatorname*{argmin}_{g \in \mathcal{F}} R_{\alpha,\lambda}(g) = \operatorname*{argmin}_{g \in \mathcal{F}} \Big( \frac{1}{mn}\sum_{j=1}^{m\alpha} L_\tau(g, z'_j) + \frac{\lambda}{2}\|g\|_2^2 \Big)$ (8)
Assume the point-wise loss $\ell_\tau$ is convex and differentiable, and is bounded by $\ell_\tau(f, z, k) \le \gamma$. Assume f(x, k) is ρ-admissible. Let a local feature value be bounded by v such that $x^{(k,q)} \le v$ for $q \in \{1, \dots, d\}$. Then, for any $\delta \in (0, 1)$, with probability at least 1 − δ over the random draw of the training set S, the generalization risk R(f) is bounded by
$R(f) \le R_e(f) + \frac{2d\tau^2\rho^2v^2n^2}{m\lambda\alpha} + \Big(\frac{(4m-2)d\tau^2\rho^2v^2n^2}{m\lambda\alpha^2} + \gamma\Big)\sqrt{\frac{\alpha \ln \delta^{-1}}{2m}}$ (9)
Since τ, ρ, and v are typically small compared with other variables, especially m, (9) can be approximated as follows by ignoring small terms:
$R(f) \le R_e(f) + O\Big(\frac{d n^2 \sqrt{\ln \delta^{-1}}}{\lambda \alpha^{1.5} \sqrt{m}}\Big)$ (10)
The proof is given in the full version of this work [14].
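The $O(\alpha^{1.5})$ effect in (10) can be made concrete numerically; a sketch of the bound's dominant term, up to its hidden constant (the function is ours, for illustration):

```python
import math

def overfit_bound(d, n, m, lam, alpha, delta=0.1):
    # Dominant term of the bound (10):
    # O( d * n^2 * sqrt(ln(1/delta)) / (lam * alpha^1.5 * sqrt(m)) ),
    # with the hidden constant ignored.
    return d * n ** 2 * math.sqrt(math.log(1.0 / delta)) / (
        lam * alpha ** 1.5 * math.sqrt(m))
```

For instance, raising the structure regularization strength from α = 1 to α = 4 shrinks this term by a factor of 4^1.5 = 8, independently of d, n, m, and λ.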
We call the term $O\big(\frac{d n^2 \sqrt{\ln \delta^{-1}}}{\lambda \alpha^{1.5} \sqrt{m}}\big)$ in (10) the "overfit-bound", and reducing the overfit-bound is crucial for reducing the generalization risk bound. First, (10) suggests that structure complexity n can increase the overfit-bound on a magnitude of $O(n^2)$, and applying weight regularization can reduce the overfit-bound by $O(\lambda)$. Importantly, applying structure regularization further (over weight regularization) can additionally reduce the overfit-bound by a magnitude of $O(\alpha^{1.5})$. Since many applications in practice are based on sparse features, using a sparse feature assumption can further improve the generalization bound. The improved generalization bounds are given in the full version of this work [14].

2.3 Accelerating Convergence Rates in Training

We also analyze the impact of applying structure regularization on the convergence rate of online learning. Following prior work [9], our analysis is based on stochastic gradient descent (SGD) with a fixed learning rate. Let $g(w)$ be the structured prediction objective function and $w \in \mathcal{W}$ the weight vector. Recall that the SGD update with fixed learning rate η has the form
$w_{t+1} \leftarrow w_t - \eta \nabla g_{z_t}(w_t)$ (11)
where $g_z(w_t)$ is the stochastic estimation of the objective function based on z, which is randomly drawn from S. To state our convergence rate analysis results, we need several assumptions following (Nemirovski et al. 2009). We assume g is strongly convex with modulus c, that is, $\forall w, w' \in \mathcal{W}$,
$g(w') \ge g(w) + (w' - w)^\top \nabla g(w) + \frac{c}{2}\|w' - w\|^2$ (12)
When g is strongly convex, there is a global optimum/minimizer $w^*$. We also assume Lipschitz continuous differentiability of g with constant q, that is, $\forall w, w' \in \mathcal{W}$,
$\|\nabla g(w') - \nabla g(w)\| \le q \|w' - w\|$ (13)
It is also reasonable to assume that the norm of $\nabla g_z(w)$ has almost surely positive correlation with the structure complexity of z,4 which can be quantified by a bound $\kappa \in \mathbb{R}^+$:
$\|\nabla g_z(w)\|_2 \le \kappa |z|$ almost surely for $\forall w \in \mathcal{W}$ (14)

4Many structured prediction systems (e.g., CRFs) satisfy this assumption: the gradient based on a larger sample (i.e., larger n) is expected to have a larger norm.
We also assume Lipschitz continuous differentiability of g with constant q, that is, for all w, w′ ∈ W,

||∇g(w′) − ∇g(w)|| ≤ q ||w′ − w||    (13)

It is also reasonable to assume that the norm of ∇g_z(w) has almost surely positive correlation with the structure complexity of z (see footnote 4), which can be quantified by a bound κ ∈ R⁺:

||∇g_z(w)||² ≤ κ|z| almost surely for all w ∈ W    (14)

where |z| denotes the structure complexity of z. Moreover, it is reasonable to assume

ηc < 1    (15)

because even ordinary gradient descent methods will diverge if ηc > 1. Then, we show that structure regularization can quadratically accelerate the SGD convergence rate:

Footnote 4: Many structured prediction systems (e.g., CRFs) satisfy this assumption: the gradient based on a larger sample (i.e., large n) is expected to have a larger norm.

Proposition 5 (Convergence rates vs. structure regularization) With the aforementioned assumptions, let the SGD training have a learning rate defined as η = cεβα² / (qκ²n²), where ε > 0 is a convergence tolerance value and β ∈ (0, 1]. Let t be an integer satisfying

t ≥ qκ²n² log(qa₀/ε) / (εβc²α²)    (16)

where n and α ∈ [1, n] are as before, and a₀ is the initial squared distance, which depends on the initialization of the weights w₀ and the minimizer w*, i.e., a₀ = ||w₀ − w*||². Then, after t updates of w, it converges to E[g(w_t) − g(w*)] ≤ ε.

The proof is given in the full version of this work [14]. As we can see, using structure regularization with strength α can quadratically accelerate the convergence rate, by a factor of α².

3 Experiments

Diversified Tasks. The natural language processing tasks include (1) part-of-speech tagging, (2) biomedical named entity recognition, and (3) Chinese word segmentation. The signal processing task is (4) sensor-based human activity recognition. Tasks (1) to (3) use boolean features and task (4) adopts real-valued features.
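The learning rate and iteration bound of Proposition 5 can be evaluated directly. The sketch below plugs illustrative constants (the defaults for q, κ, c, β, a₀ are arbitrary choices, not values from the paper) into η = cεβα²/(qκ²n²) and bound (16), showing the required number of updates shrinking by roughly a factor of α².

```python
from math import log, ceil

def sgd_budget(n, alpha, q=1.0, kappa=1.0, c=0.5, eps=1e-2, beta=1.0, a0=1.0):
    """Learning rate and iteration bound of Proposition 5 / eq. (16).
    All default constants are illustrative, not values from the paper."""
    eta = c * eps * beta * alpha ** 2 / (q * kappa ** 2 * n ** 2)
    t = ceil(q * kappa ** 2 * n ** 2 * log(q * a0 / eps)
             / (eps * beta * c ** 2 * alpha ** 2))
    return eta, t

eta1, t1 = sgd_budget(n=20, alpha=1)   # no structure regularization
eta4, t4 = sgd_budget(n=20, alpha=4)   # decompose each sequence into 4 pieces
# t scales as 1/alpha^2: roughly a 16x reduction in required updates here
```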
From tasks (1) to (4), the averaged structure complexity (number of observations) n differs considerably, with n = 23.9, 26.5, 46.6, and 67.9, respectively. The dimension of the tag set |Y| is also diverse among tasks, with |Y| ranging from 5 to 45.

Part-of-Speech Tagging (POS-Tagging). Part-of-Speech (POS) tagging is an important and highly competitive task. We use the standard benchmark dataset of prior work [3], with 38,219 training samples and 5,462 test samples. Following prior work [19], we use features based on words and lexical patterns, with 393,741 raw features (see footnote 5). The evaluation metric is per-word accuracy.

Biomedical Named Entity Recognition (Bio-NER). This task is from the BioNLP-2004 shared task [19]. There are 17,484 training samples and 3,856 test samples. Following prior work [19], we use word pattern features and POS features, with 403,192 raw features in total. The evaluation metric is balanced F-score.

Word Segmentation (Word-Seg). We use the MSR data provided by the SIGHAN-2004 contest [4]. There are 86,918 training samples and 3,985 test samples. The features are similar to [16], with 1,985,720 raw features in total. The evaluation metric is balanced F-score.

Sensor-based Human Activity Recognition (Act-Recog). This task is based on real-valued sensor signals, with the data extracted from the Bao04 activity recognition dataset [15]. The features are similar to [15], with 1,228 raw features in total. There are 16,000 training samples and 4,000 test samples. The evaluation metric is accuracy.

We choose CRFs [6] and structured perceptrons (Perc) [3], which are arguably the most popular probabilistic and non-probabilistic structured prediction models, respectively.
The CRFs are trained using the SGD algorithm (see footnote 6), and the baseline method is the traditional weight regularization scheme (WeightReg), which adopts the most representative L2 weight regularization, i.e., a Gaussian prior (see footnote 7). For the structured perceptrons, the baseline WeightAvg is the popular implicit regularization technique based on parameter averaging, i.e., the averaged perceptron [3].

Footnote 5: Raw features are those observation features based only on x, i.e., with no combination with tag information.

Footnote 6: In the theoretical analysis, following prior work, we adopt SGD with a fixed learning rate, as described in Section 2.3. However, since SGD with a decaying learning rate is more commonly used in practice, in the experiments we use SGD with a decaying learning rate.

Footnote 7: We also tested sparsity-emphasizing regularization methods, including L1 regularization and Group Lasso regularization [8]. However, we find that in most cases those methods have lower accuracy than L2 regularization.
[Figure 2 plots omitted: accuracy/F-score vs. mini-sample size (n/α) on the four tasks, for CRFs (row 1) and structured perceptrons (row 2).]

Figure 2: On the four tasks, comparing the structure regularization method (StructReg) with existing regularization methods in terms of accuracy/F-score. Row 1 shows the results on CRFs and Row 2 shows the results on structured perceptrons.

Table 1: Comparing our results with the benchmark systems on the corresponding tasks.

                     POS-Tagging (Acc%)   Bio-NER (F1%)     Word-Seg (F1%)
  Benchmark system   97.33 (see [13])     72.28 (see [19])  97.19 (see [4])
  Our results        97.36                72.43             97.50

The rich edge features [16] are employed for all methods. All methods are based on the 1st-order Markov dependency. For WeightReg, the L2 regularization strengths (i.e., λ/2 in Eq. (8)) are tuned among the values 0.1, 0.5, 1, 2, 5, and are determined on the development data (POS-Tagging) or via 4-fold cross validation on the training set (Bio-NER, Word-Seg, and Act-Recog). With this automatic tuning for WeightReg, we set 2, 5, 1, and 5 for the POS-Tagging, Bio-NER, Word-Seg, and Act-Recog tasks, respectively.

3.1 Experimental Results

The experimental results in terms of accuracy/F-score are shown in Figure 2.
For the CRF model, the training is convergent, and the results at the convergence state (decided by a relative objective change with threshold value 0.0001) are shown. For the structured perceptron model, the training is typically not convergent, and the results at the 10th iteration are shown. For stability of the curves, the results of the structured perceptrons are averaged over 10 repeated runs.

Since different samples have different sizes n in practice, we set α as a function of n, so that the generated mini-samples have a fixed size n′ with n′ = n/α. In fact, n′ follows a probability distribution because we adopt randomized decomposition. For example, if n′ = 5.5, the mini-samples are a mixture of those with size 5 and those with size 6, and the mean of the size distribution is 5.5. In the figure, the curves are based on n′ = 1.5, 2.5, 3.5, 5.5, 10.5, 15.5, 20.5.

As we can see, the results are quite consistent: structure regularization leads to higher accuracies/F-scores than the existing baselines. We also conduct significance tests based on the t-test. Since the t-test may be unreliable for the F-score based tasks (Bio-NER and Word-Seg) (see footnote 8), we only perform the t-test for the accuracy-based tasks, i.e., POS-Tagging and Act-Recog. For POS-Tagging, the significance test suggests that the superiority of StructReg over WeightReg is very statistically significant, with p < 0.01. For Act-Recog, the significance tests suggest that both the StructReg vs. WeightReg difference and the StructReg vs. WeightAvg difference are extremely statistically significant, with p < 0.0001 in both cases. The experimental results support our theoretical analysis that structure regularization can further reduce the generalization risk over existing weight regularization techniques.

Footnote 8: Indeed, we can convert F-scores to accuracy scores for the t-test, but in many cases this conversion is unreliable. For example, very different F-scores may correspond to similar accuracy scores.

[Figure 3 plots omitted: wall-clock training time vs. mini-sample size (n/α) on the four tasks, for CRFs and structured perceptrons.]

Figure 3: On the four tasks, comparing the structure regularization method (StructReg) with existing regularization methods in terms of wall-clock training time.

Our method outperforms the benchmark systems on the three important natural language processing tasks. The POS-Tagging task is highly competitive, with many methods proposed, and the best reported result so far (without using extra resources) is achieved by the bidirectional learning model of [13] (see footnote 9), with accuracy 97.33%. Our simple method achieves better accuracy than all of those state-of-the-art systems. Furthermore, our method achieves scores as good as the benchmark systems on the Bio-NER and Word-Seg tasks. On the Bio-NER task, [19] achieves 72.28% based on lookahead learning and [21] achieves 72.65% based on reranking.
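The randomized decomposition with fractional mini-sample size n′ described above can be sketched as follows; `mini_samples` is a hypothetical helper that draws each chunk size so that the mean equals n′ (e.g., n′ = 5.5 mixes sizes 5 and 6).

```python
import random

def mini_samples(seq, n_prime, rng):
    """Split seq into chunks whose sizes average n_prime; a fractional
    n_prime (e.g. 5.5) yields a random mixture of floor and ceil sizes,
    matching the randomized decomposition described in the text."""
    lo = int(n_prime)
    frac = n_prime - lo
    out, i = [], 0
    while i < len(seq):
        size = lo + (1 if rng.random() < frac else 0)
        out.append(seq[i:i + size])
        i += size
    return out

chunks = mini_samples(list(range(100)), 5.5, random.Random(1))
# chunks concatenate back to the original sequence; every chunk except
# possibly the last has size 5 or 6
```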
On the Word-Seg task, [4] achieves 97.19% based on maximum entropy classification and our recent work [16] achieves 97.5% based on feature-frequency-adaptive online learning. The comparisons are summarized in Table 1.

Figure 3 shows the experimental comparisons in terms of wall-clock training time. As we can see, the proposed method substantially improves the training speed. The speedup comes not only from the faster convergence rates, but also from the faster processing of the structures, because it is more efficient to process the decomposed samples with simple structures.

4 Conclusions

We proposed a structure regularization framework, which decomposes training samples into mini-samples with simpler structures, deriving a trained model with regularized structural complexity. Our theoretical analysis showed that this method can effectively reduce the generalization risk and accelerate convergence in training. The proposed method does not change the convexity of the objective function and can be used together with any existing weight regularization method. Note that the proposed method and the theoretical results apply to general structures, including linear chains, trees, and graphs. Experimental results demonstrated that our method achieves better results than state-of-the-art systems on several highly competitive tasks, with substantially faster training speed.

Acknowledgments. This work was supported in part by NSFC (No. 61300063).

Footnote 9: See a collection of the systems at http://aclweb.org/aclwiki/index.php?title=POS_Tagging_(State_of_the_art)

References

[1] A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework for multi-task structure learning. In Proceedings of NIPS'07. MIT Press, 2007.
[2] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2:499–526, 2002.
[3] M. Collins.
Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP'02, pages 1–8, 2002.
[4] J. Gao, G. Andrew, M. Johnson, and K. Toutanova. A comparative study of parameter estimation methods for statistical natural language processing. In Proceedings of ACL'07, pages 824–831, 2007.
[5] J. Graça, K. Ganchev, B. Taskar, and F. Pereira. Posterior vs parameter sparsity in latent variable models. In Proceedings of NIPS'09, pages 664–672, 2009.
[6] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML'01, pages 282–289, 2001.
[7] B. London, B. Huang, B. Taskar, and L. Getoor. PAC-Bayes generalization bounds for randomized structured prediction. In NIPS Workshop on Perturbation, Optimization and Statistics, 2007.
[8] A. F. T. Martins, N. A. Smith, M. A. T. Figueiredo, and P. M. Q. Aguiar. Structured sparsity in structured prediction. In Proceedings of EMNLP'11, pages 1500–1511, 2011.
[9] F. Niu, B. Recht, C. Re, and S. J. Wright. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In NIPS'11, pages 693–701, 2011.
[10] A. Quattoni, X. Carreras, M. Collins, and T. Darrell. An efficient projection for l1,infinity regularization. In Proceedings of ICML'09, page 108, 2009.
[11] M. W. Schmidt and K. P. Murphy. Convex structure learning in log-linear models: Beyond pairwise potentials. In Proceedings of AISTATS'10, volume 9 of JMLR Proceedings, pages 709–716, 2010.
[12] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Learnability and stability in the general learning setting. In Proceedings of COLT'09, 2009.
[13] L. Shen, G. Satta, and A. K. Joshi. Guided learning for bidirectional sequence classification. In Proceedings of ACL'07, 2007.
[14] X. Sun. Structure regularization for structured prediction: Theories and experiments. Technical report, arXiv, 2014.
[15] X. Sun, H. Kashima, and N.
Ueda. Large-scale personalized human activity recognition using online multitask learning. IEEE Trans. Knowl. Data Eng., 25(11):2551–2563, 2013.
[16] X. Sun, W. Li, H. Wang, and Q. Lu. Feature-frequency-adaptive on-line training for fast and accurate natural language processing. Computational Linguistics, 40(3):563–586, 2014.
[17] C. A. Sutton and A. McCallum. Piecewise pseudolikelihood for efficient training of conditional random fields. In ICML'07, pages 863–870. ACM, 2007.
[18] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS'03, 2003.
[19] Y. Tsuruoka, Y. Miyao, and J. Kazama. Learning with lookahead: Can history-based models rival globally optimized models? In Conference on Computational Natural Language Learning, 2011.
[20] H. Xue, S. Chen, and Q. Yang. Structural regularized support vector machine: A framework for structural large margin classifier. IEEE Transactions on Neural Networks, 22(4):573–587, 2011.
[21] K. Yoshida and J. Tsujii. Reranking for biomedical named-entity recognition. In ACL Workshop on BioNLP, pages 209–216, 2007.
[22] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68:49–67, 2006.
Bayes-Adaptive Simulation-based Search with Value Function Approximation

Arthur Guez*,1,2  Nicolas Heess2  David Silver2  Peter Dayan1
*aguez@google.com  1Gatsby Unit, UCL  2Google DeepMind

Abstract

Bayes-adaptive planning offers a principled solution to the exploration-exploitation trade-off under model uncertainty. It finds the optimal policy in belief space, which explicitly accounts for the expected effect on future rewards of reductions in uncertainty. However, the Bayes-adaptive solution is typically intractable in domains with large or continuous state spaces. We present a tractable method for approximating the Bayes-adaptive solution by combining simulation-based search with a novel value function approximation technique that generalises appropriately over belief space. Our method outperforms prior approaches in both discrete bandit tasks and simple continuous navigation and control tasks.

1 Introduction

A fundamental problem in sequential decision making is controlling an agent when the environmental dynamics are only partially known. In such circumstances, probabilistic models of the environment are used to capture the uncertainty of current knowledge given past data; they thus imply how exploring the environment can be expected to lead to new, exploitable, information. In the context of Bayesian model-based reinforcement learning (RL), Bayes-adaptive (BA) planning [8] solves the resulting exploration-exploitation trade-off by directly optimizing future expected discounted return in the joint space of states and beliefs about the environment (or, equivalently, interaction histories). Performing such optimization even approximately is computationally highly challenging; however, recent work has demonstrated that online planning by sample-based forward-search can be effective [22, 1, 12]. These algorithms estimate the value of future interactions by simulating trajectories while growing a search tree, taking model uncertainty into account.
However, one major limitation of Monte Carlo search algorithms in general is that, naïvely applied, they fail to generalize values between related states. In the BA case, a separate value is stored for each distinct path of possible interactions. Thus, the algorithms fail not only to generalize values between related paths, but also to reflect the fact that different histories can correspond to the same belief about the environment. As a result, the number of required simulations grows exponentially with search depth. Worse yet, except in very restricted scenarios, the lack of generalization renders MC search algorithms effectively inapplicable to BAMDPs with continuous state or action spaces.

In this paper, we propose a class of efficient simulation-based algorithms for Bayes-adaptive model-based RL which use function approximation to estimate the value of interaction histories during search. This enables generalization between different beliefs, states, and actions during planning, and therefore also works for continuous state spaces. To our knowledge this is the first broadly applicable MC search algorithm for continuous BAMDPs. Our algorithm builds on the success of a recent tree-based algorithm for discrete BAMDPs (BAMCP, [12]) and exploits value function approximation for generalization across interaction histories, as has been proposed for simulation-based search in MDPs [19]. As a crucial step towards this end, we develop a suitable parametric form for the value function estimates that can generalize appropriately across histories, using the importance sampling weights of posterior samples to compress histories into a finite-dimensional feature vector. As in BAMCP, we take advantage of root sampling [18, 12] to avoid expensive belief updates at every step of simulation, making the algorithm practical for a broad range of priors over environment dynamics. We also provide an interpretation of root sampling as an auxiliary variable sampling method.
This leads to a new proof of its validity in general simulation-based settings, including BAMDPs with continuous state and action spaces, and a large class of algorithms that includes MC and TD updates. Empirically, we show that our approach requires considerably fewer simulations to find good policies than BAMCP in a (discrete) bandit task and two continuous control tasks with a Gaussian process prior over the dynamics [5, 6]. In the well-known pendulum swing-up task, our algorithm learns how to balance after just a few seconds of interaction.

Below, we first briefly review the Bayesian formulation of optimal decision making under model uncertainty (section 2; please see [8] for additional details). We then explain our algorithm (section 3) and present empirical evaluations in section 4. We conclude with a discussion, including related work (sections 5 and 6).

2 Background

A Markov Decision Process (MDP) is described as a tuple M = ⟨S, A, P, R, γ⟩, with S the set of states (which may be infinite), A the discrete set of actions, P : S × A × S → R the state transition probability kernel, R : S × A → R the reward function, and γ < 1 the discount factor. The agent starts with a prior P(P) over the dynamics, and maintains a posterior distribution b_t(P) = P(P | h_t) ∝ P(h_t | P) P(P), where h_t denotes the history of states, actions, and rewards up to time t. The uncertainty about the dynamics of the model can be transformed into certainty about the current state inside an augmented state space S⁺ = H × S, where H is the set of possible histories (the current state also being the suffix of the current history). The dynamics and rewards associated with this augmented state space are described by

P⁺(h, s, a, has′, s′) = ∫_P P(s, a, s′) P(P | h) dP,    R⁺(h, s, a) = R(s, a).    (1)

Together, the 5-tuple M⁺ = ⟨S⁺, A, P⁺, R⁺, γ⟩ forms the Bayes-Adaptive MDP (BAMDP) for the MDP problem M.
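For a discrete MDP, the posterior b_t(P) ∝ P(h_t | P) P(P) and the BAMDP transition kernel P⁺ of eq. (1) have a simple closed form under an independent Dirichlet prior per state-action pair. The sketch below (an illustrative assumption, not the paper's construction) maintains the posterior from transition counts and exposes the posterior-predictive next-state distribution.

```python
from collections import defaultdict

class DirichletBelief:
    """Posterior over the dynamics of a discrete MDP: an independent
    Dirichlet(prior, ..., prior) per (s, a) pair, updated by counting
    observed transitions. Hypothetical sketch, not the paper's code."""
    def __init__(self, n_states, prior=1.0):
        self.n_states = n_states
        self.counts = defaultdict(lambda: [prior] * n_states)

    def update(self, s, a, s_next):
        self.counts[(s, a)][s_next] += 1.0

    def predictive(self, s, a):
        """Posterior-predictive next-state distribution, i.e. the BAMDP
        transition P+(. | h, s, a) of eq. (1) in closed form."""
        c = self.counts[(s, a)]
        total = sum(c)
        return [ci / total for ci in c]

belief = DirichletBelief(n_states=2)
for s_next in (1, 1, 0):          # three observed transitions from (s=0, a=0)
    belief.update(0, 0, s_next)
p = belief.predictive(0, 0)       # Dirichlet-multinomial: [2/5, 3/5]
```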
Since the dynamics of the BAMDP are known, it can in principle be solved to obtain the optimal value function associated with each action:

Q*(h_t, s_t, a) = max_π̃ E_π̃ [ Σ_{t′=t}^∞ γ^{t′−t} r_{t′} | a_t = a ];    π̃*(h_t, s_t) = argmax_a Q*(h_t, s_t, a),    (2)

where π̃ : S⁺ × A → [0, 1] is a policy over the augmented state space, from which the optimal action for each belief-state, π̃*(h_t, s_t), can readily be derived. Optimal actions in the BAMDP are executed greedily in the real MDP M and constitute the best course of action (i.e., integrating exploration and exploitation) for a Bayesian agent with respect to its prior belief over P.

3 Bayes-Adaptive simulation-based search

Our simulation-based search algorithm for the Bayes-adaptive setup combines efficient MC search via root sampling with value function approximation. We first explain its underlying idea, assuming a suitable function approximator exists, and provide a novel proof justifying the use of root sampling that also applies in continuous state-action BAMDPs. Finally, we explain how to model Q-values as a function of interaction histories.

3.1 Algorithm

As in other forward-search planning algorithms for Bayesian model-based RL [22, 17, 1, 12], at each step t, which is associated with the current history h_t (or belief) and state s_t, we plan online to find π̃*(h_t, s_t) by constructing an action-value function Q(h, s, a). Such methods use simulation to build a search tree of belief states, each of whose nodes corresponds to a single (future) history, and estimate optimal values for these nodes. However, existing algorithms only update the nodes that are directly traversed in each simulation. This is inefficient, as it fails to generalize across multiple histories corresponding either to exactly the same, or similar, beliefs. Instead, each such history must be traversed and updated separately.
Here, we use a more general simulation-based search that relies on function approximation, rather than a tree, to represent the values of possible simulated histories and states. This approach was originally suggested in the context of planning in large MDPs [19]; we extend it to the case of Bayes-Adaptive planning. The Q-value of a particular history, state, and action is represented as Q(h, s, a; w), where w is a vector of learnable parameters. Fixed-length simulations are run from the current belief-state h_t, s_t, and the parameter w is updated online, during search, based on experience accumulated along these trajectories, using an incremental RL control algorithm (e.g., Monte-Carlo control, Q-learning). If the parametric form and features induce generalization between histories, then each forward simulation can affect the values of histories that are not directly experienced. This can considerably speed up planning, and enables continuous-state problems to be tackled. Note that a search tree would be a special case of the function approximation approach when the representation of states and histories is tabular.

Algorithm 1: Bayes-Adaptive simulation-based search with root sampling

procedure Search(h_t, s_t)
    repeat
        P ∼ P(P | h_t)
        Simulate(h_t, s_t, P, 0)
    until Timeout()
    return argmax_a Q(h_t, s_t, a; w)
end procedure

procedure Simulate(h, s, P, t)
    if t > T then return 0
    a ← π̃_{ε-greedy}(Q(h, s, ·; w))
    s′ ∼ P(s, a, ·),  r ← R(s, a)
    R ← r + γ Simulate(has′, s′, P, t+1)
    w ← w − α (Q(h, s, a; w) − R) ∇_w Q(h, s, a; w)
    return R
end procedure

In the context of Bayes-Adaptive planning, simulation-based search works by simulating a future trajectory h_{t+T} = s_t a_t r_t s_{t+1} ... a_{t+T−1} r_{t+T−1} s_{t+T} of T transitions (the planning horizon) starting from the current belief-state h_t, s_t. Actions are selected by following a fixed policy π̃, which is itself a function of the history, a ∼ π̃(h, ·).
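A minimal executable rendering of Algorithm 1 for a toy discrete problem might look as follows; the tabular dictionary Q, the callback signatures, and the toy reward are all hypothetical choices made for the sketch. Root sampling draws the dynamics once per simulation, as in the pseudocode above.

```python
import random

def ba_search(h0, s0, sample_P, reward, Q, n_sims, T,
              actions=(0, 1), alpha=0.1, gamma=0.95, eps=0.3, seed=0):
    """Executable sketch of Algorithm 1 on a toy discrete problem.
    sample_P(rng) root-samples a transition function P(s, a, rng) -> s';
    Q is a tabular dict over (history, state, action) keys."""
    rng = random.Random(seed)

    def simulate(h, s, P, t):
        if t >= T:
            return 0.0
        qs = [Q.get((h, s, a), 0.0) for a in actions]
        if rng.random() < eps:                      # epsilon-greedy policy
            a = rng.choice(actions)
        else:
            a = actions[qs.index(max(qs))]
        s2 = P(s, a, rng)
        r = reward(s, a)
        ret = r + gamma * simulate(h + ((a, s2),), s2, P, t + 1)
        q = Q.get((h, s, a), 0.0)
        Q[(h, s, a)] = q - alpha * (q - ret)        # MC backup toward return
        return ret

    for _ in range(n_sims):
        P = sample_P(rng)                           # root sampling
        simulate(h0, s0, P, 0)
    qs = [Q.get((h0, s0, a), 0.0) for a in actions]
    return actions[qs.index(max(qs))]

# Degenerate check: known static dynamics, reward = chosen action, T = 1;
# the search should discover that action 1 is better at the root.
Q = {}
best = ba_search(h0=(), s0=0,
                 sample_P=lambda rng: (lambda s, a, rng2: s),
                 reward=lambda s, a: float(a),
                 Q=Q, n_sims=300, T=1)
```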
State transitions can be sampled according to the BAMDP dynamics, s_{t′} ∼ P⁺(h_{t′−1}, s_{t′−1}, a_{t′}, h_{t′−1}a_{t′}·, ·). However, this can be computationally expensive, since belief updates must be applied at every step of the simulation. As an alternative, we use root sampling [18], which samples the dynamics P_k ∼ P(P | h_t) only once at the root for each simulation k, and then samples transitions according to s_{t′} ∼ P_k(s_{t′−1}, a_{t′−1}, ·); we provide justification for this approach in Section 3.2 (see footnote 1).

After the trajectory h_{t+T} has been simulated, the Q-value is modified by updating w based on the data in h_{t+T}. Any incremental algorithm could be used, including SARSA, Q-learning, or gradient TD [20]; we use a simple scheme to minimize an appropriately weighted squared loss E[(Q(h_{t′}, s_{t′}, a_{t′}; w) − R_{t′})²]:

∆w = −α (Q(h_{t′}, s_{t′}, a_{t′}; w) − R_{t′}) ∇_w Q(h_{t′}, s_{t′}, a_{t′}; w),    (3)

where α is the learning rate and R_{t′} denotes the discounted return obtained from history h_{t′} (see footnote 2). Algorithm 1 provides pseudo-code for this scheme; as the fixed policy for a simulation, we suggest using the ε-greedy policy π̃_{ε-greedy} based on some given Q value. Other policies could be considered (e.g., the UCT policy for search trees), but they are not the main focus of this paper.

3.2 Analysis

In order to exploit general results on the convergence of classical RL algorithms for our simulation-based search, it is necessary to show that, starting from the current history, root sampling produces the appropriate distribution of rollouts. For the purpose of this section, a simulation-based search algorithm includes Algorithm 1 (with Monte-Carlo backups), but also incremental variants, as discussed above, or BAMCP. Let D_t^π̃ be the rollout distribution function of forward-simulations that explicitly update the belief at each step (i.e., using P⁺): D_t^π̃(h_{t+T}) is the probability density that history h_{t+T} is generated when running such a simulation from h_t, s_t, with T the horizon of the simulation and π̃ an arbitrary history policy.
Similarly, define the quantity D̃_t^π̃(h_{t+T}) as the probability density that history h_{t+T} is generated when running forward-simulations with root sampling, as in Algorithm 1. The following lemma shows that these two rollout distributions are the same.

Footnote 1: For comparison, a version of the algorithm without root sampling is listed in the supplementary material.

Footnote 2: The loss is weighted according to the distribution of belief-states visited from the current state by executing π̃.

Lemma 1. D_t^π̃(h_{t+T}) = D̃_t^π̃(h_{t+T}) for all policies π̃ : H × A → [0, 1] and for all h_{t+T} ∈ H of length t + T.

Proof. A similar result has been obtained for discrete state-action spaces as Lemma 1 in [12], using an induction step on the history length. Here we provide a more intuitive interpretation of root sampling as an auxiliary variable sampling scheme, which also applies directly to continuous spaces. We show the equivalence by rewriting the distribution of rollouts. The usual way of sampling histories in simulation-based search, with belief updates, is justified by factoring the density as follows:

p(h_{t+T} | h_t, π̃) = p(a_t s_{t+1} a_{t+1} s_{t+2} ... s_{t+T} | h_t, π̃)    (4)
= p(a_t | h_t, π̃) p(s_{t+1} | h_t, π̃, a_t) p(a_{t+1} | h_{t+1}, π̃) ... p(s_{t+T} | h_{t+T−1}, a_{t+T−1}, π̃)    (5)
= ∏_{t ≤ t′ < t+T} π̃(h_{t′}, a_{t′}) ∏_{t < t′ ≤ t+T} p(s_{t′} | h_{t′−1}, π̃, a_{t′−1})    (6)
= ∏_{t ≤ t′ < t+T} π̃(h_{t′}, a_{t′}) ∏_{t < t′ ≤ t+T} ∫_P P(P | h_{t′−1}) P(s_{t′−1}, a_{t′−1}, s_{t′}) dP,    (7)

which makes clear how each simulation step involves a belief update in order to compute (or sample) the integrals. Instead, one may write the history density as the marginalization of the joint over the history and the dynamics P, and then notice that a history is generated in a Markovian way when conditioned on the dynamics:

p(h_{t+T} | h_t, π̃) = ∫_P p(h_{t+T} | P, h_t, π̃) p(P | h_t, π̃) dP = ∫_P p(h_{t+T} | P, π̃) p(P | h_t) dP    (8)
= ∫_P ∏_{t ≤ t′ < t+T} π̃(h_{t′}, a_{t′}) ∏_{t < t′ ≤ t+T} P(s_{t′−1}, a_{t′−1}, s_{t′}) p(P | h_t) dP,    (9)

where eq. (9) makes use of the Markov assumption in the MDP.
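Lemma 1 can be checked exactly in the simplest special case: a Beta-Bernoulli "dynamics" model, where the belief-update factorization of eq. (7) becomes the sequential posterior-predictive (Pólya urn) probability, and the marginalization of eq. (9) becomes a one-shot integral over the Bernoulli parameter. The sketch below (an illustrative check, not part of the paper) computes both and confirms they agree.

```python
from math import lgamma, exp

def prob_sequential(outcomes, a=1.0, b=1.0):
    """Trajectory probability via step-by-step belief updates, as in eq. (7):
    each binary outcome is scored by the current Beta(a, b) posterior
    predictive, then the posterior counts are updated."""
    p = 1.0
    for x in outcomes:
        p *= (a if x else b) / (a + b)
        if x:
            a += 1.0
        else:
            b += 1.0
    return p

def prob_root_sampling(outcomes, a=1.0, b=1.0):
    """Trajectory probability via eq. (9): marginalize the Bernoulli
    parameter p once, i.e. integrate p^heads (1-p)^tails under Beta(a, b),
    which equals B(a+heads, b+tails) / B(a, b)."""
    heads = sum(outcomes)
    tails = len(outcomes) - heads
    log_beta = lambda x, y: lgamma(x) + lgamma(y) - lgamma(x + y)
    return exp(log_beta(a + heads, b + tails) - log_beta(a, b))

p_seq = prob_sequential((1, 0, 1))
p_root = prob_root_sampling((1, 0, 1))   # both equal 1/12 here
```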
This makes clear the validity of sampling only from p(P | h_t), as in root sampling. From these derivations, it is immediately clear that D_t^π̃(h_{t+T}) = D̃_t^π̃(h_{t+T}).

The result in Lemma 1 does not depend on the way we update the value Q, or on its representation, since the policy is fixed for a given simulation (see footnote 3). Furthermore, the result guarantees that simulation-based searches will be identical in distribution with and without root sampling. Thus, we have:

Corollary 1. Define a Bayes-adaptive simulation-based planning algorithm as a procedure that repeatedly samples future trajectories h_{t+T} ∼ D_t^π̃ from the current history h_t (simulation phase), and updates the Q value after each simulation based on the experience h_{t+T} (special cases are Algorithm 1 and BAMCP). Then such a simulation-based algorithm has the same distribution of parameter updates with or without root sampling.

This also implies that the two variants share the same fixed points, since the updates match in distribution. For example, for a discrete environment we can choose a tabular representation of the value function in history space. Applying the MC updates in eq. (3) results in an MC control algorithm applied to the sub-BAMDP from the root state. This is exactly the (BA version of the) MC tree search algorithm [12]. The same principle can also be applied to MC control with function approximation, with convergence results under appropriate conditions [2]. Finally, more general updates such as gradient Q-learning could be applied, with corresponding convergence guarantees [14].

3.3 History Features and Parametric Form for the Q-value

The quality of a history policy obtained using simulation-based search with a parametric representation Q(h, s, a; w) crucially depends on the features associated with the arguments of Q, i.e., the history, state, and action.
These features should arrange for histories that lead to the same, or similar, beliefs to have the same, or similar, representations, to enable appropriate generalization. This is challenging, since beliefs can be infinite-dimensional objects with non-compact sufficient statistics that are therefore hard to express or manipulate. Learning good representations from histories is also tough, for instance because of hidden symmetries (e.g., the irrelevance of the order of the experience tuples that lead to a particular belief).

Footnote 3: Note that, in Algorithm 1, Q is only updated after the simulation is complete.

We propose a parametric representation of the belief at a particular planning step based on sampling. That is, we draw a set of M independent MDP samples or particles U = {P_1, P_2, ..., P_M} from the current belief b_t = P(P | h_t), and associate each with a weight z^U_m(h), such that the vector z^U(h) is a finite-dimensional approximate representation of the belief based on the set U. We will also refer to z^U as a function z^U : H → R^M that maps histories to a feature vector. There are various ways one could design the z^U function. It is computationally convenient to compute z^U(h) recursively as importance weights, just as in a sequential importance sampling particle filter [11]; this only assumes we have access to the likelihood of the observations (i.e., state transitions). In other words, the weights are initialized as z^U_m(h_t) = 1/M for all m, and are then updated recursively using the likelihood of the observed transition under each particle's dynamics model:

z^U_m(has′) ∝ z^U_m(h) P(s′ | a, s, P_m) = z^U_m(h) P_m(s, a, s′).

One advantage of this definition is that it enforces a correspondence between the history and belief representations in the finite-dimensional space, in the sense that z^U(h′) = z^U(h) if belief(h) = belief(h′).
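The recursive importance-weight update for z^U can be sketched directly; the two particles below are hypothetical transition models, and the normalization step is a convenience added for this sketch (the text only states proportionality).

```python
def update_weights(z, transition, particles):
    """One step of the recursive update z_m(h a s') ∝ z_m(h) * P_m(s, a, s').
    `particles` are sampled dynamics models exposing their transition
    likelihoods; weights are renormalized for convenience."""
    s, a, s2 = transition
    z = [zm * Pm(s, a, s2) for zm, Pm in zip(z, particles)]
    total = sum(z)
    return [zm / total for zm in z] if total > 0 else z

# Two hypothetical particles for a 2-state chain:
sticky = lambda s, a, s2: 0.9 if s2 == s else 0.1   # self-transitions likely
uniform = lambda s, a, s2: 0.5                       # all transitions equal
z = [0.5, 0.5]                                       # z_m(h_t) = 1/M
z = update_weights(z, (0, 0, 0), [sticky, uniform])  # observe s=0 -> s'=0
# the sticky particle now carries more weight: z ≈ [0.643, 0.357]
```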
That is, we can work in history space during planning, alleviating the need for complete belief updates, but via a finite and well-behaved representation of the actual belief, since different histories corresponding to the same belief are mapped to the same representation. This feature vector can be combined with any function approximator. In our experiments, we combine it with features of the current state and action, $\phi(s, a)$, in a simple bilinear form:

$$Q(h, s, a; W) = z^U(h)^\top W\, \phi(s, a), \qquad (10)$$

where W is the matrix of learnable parameters adjusted during the search (eq. 3). Here $\phi(s, a)$ is a domain-dependent state-action feature vector, as is standard in fully observable settings with function approximation. Special cases include tabular representations or forms of tile coding. We discuss the relation of this parametric form to the true value function in the Supp. material. In the next section, we investigate empirically in three varied domains the combination of this parametric form, simulation-based search and Monte-Carlo backups, collectively known as BAFA (for Bayes-Adaptive planning with Function Approximation).

4 Experimental results

Figure 1: (a) The weights $m_{\alpha,\beta}$. (b) Averaged (weighted) decision errors for the different methods (BAFA with M = 2, 5, 25 particles; BAMCP tree search; posterior mean) as a function of the number of simulations.

The discrete Bernoulli bandit domain (section 4.1) demonstrates dramatic efficiency gains due to generalization, with convergence to a near Bayes-optimal solution. The navigation task (section 4.2) and the pendulum (section 4.3) demonstrate the ability of BAFA to handle non-trivial planning horizons for large BAMDPs with continuous states. We provide comparisons to a state-of-the-art BA tree-search algorithm (BAMCP, [12]), choosing a suitable discretization of the state space for the continuous problems.
For the pendulum we also compare to two Bayesian, but not Bayes-adaptive, approaches.

4.1 Bernoulli Bandit

Bandits have simple dynamics, yet they are still challenging for a generic Bayes-adaptive planner. Importantly, ground truth is sometimes available [10], so we can evaluate how far the approximations are from Bayes-optimality. We consider a 2-armed Bernoulli bandit problem. We oppose an uncertain arm with prior success probability $p_1 \sim \text{Beta}(\alpha, \beta)$ against an arm with known success probability $p_0$. We consider the scenario $\gamma = 0.99$, $p_0 = 0.2$, for which the optimal decision and the posterior-mean decision frequently differ. Decision errors for different values of $\alpha, \beta$ do not have the same consequence, so we weight each scenario according to the difference between their associated Gittins indices. Define the weight as $m_{\alpha,\beta} = |g_{\alpha,\beta} - p_0|$, where $g_{\alpha,\beta}$ is the Gittins index for $\alpha, \beta$; this is an upper bound (up to a scaling factor) on the difference between the value of the arms. The weights are shown in Figure 1-a. We compute the weighted errors over 20 runs for a particular method as $E_{\alpha,\beta} = m_{\alpha,\beta} \cdot P(\text{wrong decision for } (\alpha, \beta))$, and report the sum of these terms across the range $1 \le \alpha \le 10$ and $1 \le \beta \le 19$ in Figure 1-b as a function of the number of simulations. Though this is a discrete problem, these results show that the value function approximation approach, even with a limited number of particles (M) for the history features, learns considerably more quickly than BAMCP. This is because BAFA generalizes between similar beliefs.

4.2 Height map navigation

We next consider a 2-D navigation problem on an unknown continuous height map. The agent's state is $(x, y, z, \theta)$; it moves on a bounded $8\,\text{m} \times 8\,\text{m}$ region of the $(x, y)$ plane according to (known) noisy dynamics.
The agent chooses between 5 different actions; the dynamics for $(x, y)$ are

$$(x_{t+1}, y_{t+1}) = (x_t, y_t) + l\,(\cos(\theta_a), \sin(\theta_a)) + \boldsymbol{\epsilon},$$

where $\theta_a$ corresponds to the action from the set $\theta_a \in \theta + \{-\frac{\pi}{3}, -\frac{\pi}{6}, 0, \frac{\pi}{6}, \frac{\pi}{3}\}$, $\boldsymbol{\epsilon}$ is small isotropic Gaussian noise ($\sigma = 0.05$), and $l = \frac{1}{3}$ m is the step size. Within the bounded region, the reward function is the value of a latent height map $z = f(x, y)$, which is only observed at a single point by the agent. The height map is a draw from a Gaussian process (GP), $f \sim GP(0, K)$, using a multi-scale squared exponential kernel for the covariance matrix and zero mean. In order to test long-horizon planning, we downplay situations where the agent can simply follow the expected gradient locally to reach high-reward regions by starting the agent on a small local maximum. To achieve this we simply condition the GP draw on a few pseudo-observations with small negative z around the agent and a small positive z at the starting position, which creates a small bump (on average). The domain is illustrated in Figure 2-a with an example map. We compare BAMCP against BAFA on this domain, planning over 75 steps with a discount of 0.98. Since BAMCP works with discrete states, we uniformly discretize the height observations. For the state features in BAFA, we use a regular tile coding of the space; an RBF network leads to similar results. We use a common set of 100 ground-truth maps drawn from the prior for each algorithm/setting, and we average the discounted return over 200 runs (2 runs/map) and report that result in Figure 2-b as a function of the planning horizon (T). This result illustrates the ability of BAFA to cope with non-trivial planning horizons in belief space. Despite the discretization, BAMCP is very efficient with short planning horizons, but has trouble optimizing the history policy with long horizons because of the huge tree induced by the discretization of the observations.
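A sketch of these navigation dynamics; the clipping at the region boundary is an assumption, since the text only states that the region is bounded:

```python
import math
import random

# Five heading offsets, step length 1/3 m, isotropic Gaussian noise (sigma = 0.05),
# kept inside the 8 m x 8 m region by clipping (an assumption).
OFFSETS = [-math.pi / 3, -math.pi / 6, 0.0, math.pi / 6, math.pi / 3]
L, SIGMA, SIZE = 1.0 / 3.0, 0.05, 8.0

def step(x, y, theta, action):
    ta = theta + OFFSETS[action]
    nx = x + L * math.cos(ta) + random.gauss(0.0, SIGMA)
    ny = y + L * math.sin(ta) + random.gauss(0.0, SIGMA)
    nx = min(max(nx, 0.0), SIZE)
    ny = min(max(ny, 0.0), SIZE)
    return nx, ny, ta

random.seed(1)
state = (4.0, 4.0, 0.0)
for a in [2, 0, 4]:          # straight, hard left, hard right
    state = step(*state, a)
```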
Figure 2: (a) Example map with the height color-coded from white (negative reward z) to black (positive reward z). The black dots denote the location of the initial pseudo-observations used to obtain the ground-truth map. The white squares show the past trajectory of the agent, starting at the cross and ending at the current position in green. The green trajectory is one particular forward simulation of BAFA from that position. (b) Averaged discounted return (higher is better) in the navigation domain for discretized BAMCP and BAFA as a function of the number of simulations (K) and of the planning horizon (x-axis).

4.3 Under-actuated Pendulum Swing-up

Finally, we consider the classic RL problem in which an agent must swing a pendulum from hanging vertically down to balancing vertically up, given only limited torque. This requires the agent to build up momentum by swinging before being able to balance. Note that although a wide variety of methods can successfully learn this task given enough experience, it is a challenging domain for Bayes-adaptive algorithms, which accordingly have not previously been applied to it. We use conventional parameter settings for the pendulum [5]: a mass of 1 kg, a length of 1 m, a maximum torque of 5 Nm, and a coefficient of friction of 0.05 kg m²/s. The state of the pendulum is $s = (\theta, \dot\theta)$. Each time-step corresponds to 0.05 s, $\gamma = 0.98$, and the reward function is $R(s) = \cos(\theta)$. In the initial state, the pendulum is pointing down with no velocity, $s_0 = (\pi, 0)$. Three actions are available to the agent, applying a torque of either $\{-5, 0, 5\}$ Nm. The agent does not initially know the dynamics of the pendulum. As in [5], we assume it employs independent Gaussian processes to capture the state change in each dimension for a given action.
That is, $s^i_{t+1} - s^i_t \sim GP(m^i_a, K^i_a)$ for each state dimension i and each action a (where the $K^i_a$ are squared exponential kernels). Since there are 2 dimensions and 3 actions, we maintain 6 Gaussian processes, and plan in the joint space of $(\theta, \dot\theta)$ together with the possible future GP posteriors to decide which action to take at any given step. We compare four approaches on this problem to understand the contributions of both generalization and Bayes-adaptive planning to the performance of the agent. BAFA includes both; we also consider two non-Bayes-adaptive variants using the same simulation-based approach with value generalization. In a Thompson sampling variant (THOMP), we only consider a single posterior sample of the dynamics at each step and greedily solve using simulation-based search. In an exploit-only variant (FA), we run a simulation-based search that optimizes a state-only policy over the uncertainty in the dynamics; this is achieved by running BAFA with no history feature.⁴ For BAFA, FA, and THOMP, we use the same RBF network for the state-action features, consisting of 900 nodes. In addition, we also consider the BAMCP planner with a uniform discretization of the $(\theta, \dot\theta)$ space that worked best in a coarse initial search; this method performs Bayes-adaptive planning but with no value generalization.

Figure 3: Histograms of the delay until the agent reaches its first balanced state ($|\theta| < \frac{\pi}{4}$ for $\ge 3$ s) for the different methods (BAFA, BAMCP, FA, THOMP) in the pendulum domain. (a) A standard version of the pendulum problem with a cosine cost function.
(b) A more difficult version of the problem with uncertain cost for balancing (see text). There is a 20 s time limit, so all runs which do not achieve balancing within that time window are reported in the red bar. The histogram is computed with 100 runs with (a) K = 10000 or (b) K = 15000 simulations for each algorithm, horizon T = 50 and (for BAFA) M = 50 particles. The black dashed line represents the median of the distribution.

We allow each algorithm a maximum of 20 s of interaction with the pendulum, consider as an up-state any configuration of the pendulum for which $|\theta| \le \frac{\pi}{4}$, and consider the pendulum balanced if it stays in an up-state for more than 3 s. We report in Figure 3-a the time it takes for each method to reach a balanced state for the first time. We observe that Bayes-adaptive planning (BAFA or BAMCP) outperforms more heuristic exploration methods, with most runs balancing before 8.5 s. In the Suppl. material, Figure S1 shows traces of example runs. With the same parametrization of the pendulum, Deisenroth et al. reported balancing the pole after between 15 and 60 seconds of interaction when assuming access to a restart distribution [5]. More recently, Moldovan et al. reported balancing after 12-18 s of interaction using a method tailored for locally linear dynamics [15]. However, the pendulum problem also illustrates that BA planning for this particular task is not hugely advantageous compared to more myopic approaches to exploration. We speculate that this is due to a lack of structure in the problem, and test this with a more challenging, albeit artificial, version of the pendulum problem that requires non-myopic planning over longer horizons.

⁴ The approximate value function for FA and THOMP thus takes the form $Q(s, a) = w^\top \phi(s, a)$.
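The per-dimension, per-action GP model of state changes used in the pendulum experiments can be sketched as follows; the kernel hyperparameters, the noise level, the zero prior mean, and the toy pendulum-like data are all assumptions:

```python
import numpy as np

def se_kernel(A, B, ell=1.0, sf=1.0):
    # squared exponential kernel between row-vector inputs
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

class DeltaGP:
    """Posterior over the state change s^i_{t+1} - s^i_t for one
    (dimension, action) pair; six such objects would cover the pendulum."""
    def __init__(self, X, dY, noise=1e-2):
        self.X = X
        K = se_kernel(X, X) + noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, dY)
    def mean(self, Xs):
        return se_kernel(Xs, self.X) @ self.alpha

# Toy data: states (theta, theta_dot) with delta_theta = 0.05 * theta_dot,
# i.e., the Euler step of the angle for a 0.05 s time-step.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(30, 2))
dY = 0.05 * X[:, 1]
gp = DeltaGP(X, dY)
pred = gp.mean(X[:5])
```

Planning then proceeds in the joint space of the state and the (finitely represented) posterior over these six GPs, as described in the text.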
In this modified version, balancing the pendulum (i.e., being in the region $|\theta| < \frac{\pi}{4}$) is either rewarding ($R(s) = 1$) with probability 0.5, or costly ($R(s) = -1$) with probability 0.5; all other states have an associated reward of 0. This can be modeled formally by introducing another binary latent variable in the model. These latent dynamics are observed with certainty if the pendulum reaches any state where $|\theta| \ge \frac{3\pi}{4}$. The rest of the problem is the same. To approximate the Bayes-optimal solution correctly in this setting, the planning algorithm must optimize the belief-state policy after it simulates observing whether balancing is rewarding or not. We run this version of the problem with the same algorithms as above and report the results in Figure 3-b. This hard planning problem highlights more clearly the benefits of Bayes-adaptive planning and value generalization. Our approach manages to balance the pendulum more than 80% of the time, compared to about 35% for BAMCP, while THOMP and FA fail to balance on almost all runs. In the Suppl. material, Figure S2 illustrates the influence of the number of particles M on the performance of BAFA.

5 Related Work

Simulation-based search with value function approximation has been investigated in large and also continuous MDPs, in combination with TD-learning [19] or Monte-Carlo control [3]. However, this has not been in a Bayes-adaptive setting. By contrast, existing online Bayes-adaptive algorithms [22, 17, 1, 12, 9] rely on a tree structure to build a map from histories to values. This cannot benefit from generalization in a straightforward manner, leading to the inefficiencies demonstrated above and hindering their application to the continuous case. Continuous Bayes-adaptive (PO)MDPs have been considered using an online Monte-Carlo algorithm [4]; however, this tree-based planning algorithm expands nodes uniformly and does not admit generalization between beliefs.
This severely limits the possible depth of tree search ([4] use a depth of 3). In the POMDP literature, a key idea for representing beliefs is to sample a finite set of (possibly approximate) belief points [21, 16] from the set of possible beliefs, in order to obtain a small number of (belief-)states for which to back up values offline or via forward search [13]. In contrast, our sampling approach to belief representation does not restrict the number of (approximate) belief points, since our belief features ($z(h)$) can take an infinite number of values; it instead restricts their dimension, thus avoiding infinite-dimensional belief spaces. Wang et al. [23] also use importance sampling to compute the weights of a finite set of particles. However, they use these particles to discretize the model space and thus create an approximate, discrete POMDP. They solve this offline with no (further) generalization between beliefs, and thus no opportunity to re-adjust the belief representation based on past experience. A function approximation scheme in the context of BA planning has been considered by Duff [7], in an offline actor-critic paradigm. However, this was in a discrete setting where counts could be used as features for the belief.

6 Discussion

We have introduced a tractable approach to Bayes-adaptive planning in large or continuous state spaces. Our method is quite general, subsuming Monte-Carlo tree search methods, while allowing for arbitrary generalizations over interaction histories using value function approximation. Each simulation is no longer an isolated path in an exponentially growing tree; instead, value backups can impact many non-visited beliefs and states. We proposed a particular parametric form for the action-value function based on a Monte-Carlo approximation of the belief.
To reduce the computational complexity of each simulation, we adopt a root sampling method which avoids expensive belief updates during a simulation and hence poses very few restrictions on the possible form of the prior over environment dynamics. Our experiments demonstrated that the BA solution can be effectively approximated, and that the resulting generalization can lead to substantial gains in efficiency in discrete tasks with large trees. We also showed that our approach can be used to solve continuous BA problems with non-trivial planning horizons without discretization, something which had not previously been possible. Using a widely used GP framework to model continuous system dynamics (for the case of a swing-up pendulum task), we achieved state-of-the-art performance. Our general framework can be applied with more powerful methods for learning the parameters of the value function approximation, and it can also be adapted for use with continuous actions. We expect that further gains will be possible, e.g. from the use of bootstrapping in the weight updates, alternative rollout policies, and reusing values and policies between (real) steps.

References

[1] J. Asmuth and M. Littman. Approaching Bayes-optimality using Monte-Carlo tree search. In Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence, pages 19–26, 2011.
[2] Dimitri P. Bertsekas. Approximate policy iteration: A survey and some new methods. Journal of Control Theory and Applications, 9(3):310–335, 2011.
[3] S.R.K. Branavan, D. Silver, and R. Barzilay. Learning to win by reading manuals in a Monte-Carlo framework. Journal of Artificial Intelligence Research, 43:661–704, 2012.
[4] P. Dallaire, C. Besse, S. Ross, and B. Chaib-draa. Bayesian reinforcement learning in continuous POMDPs with Gaussian processes. In Intelligent Robots and Systems (IROS 2009), IEEE/RSJ International Conference on, pages 2604–2609. IEEE, 2009.
[5] Marc Peter Deisenroth, Carl Edward Rasmussen, and Jan Peters. Gaussian process dynamic programming. Neurocomputing, 72(7):1508–1524, 2009.
[6] M.P. Deisenroth and C.E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning, pages 465–473, 2011.
[7] M. Duff. Design for an optimal probe. In Proceedings of the 20th International Conference on Machine Learning, pages 131–138, 2003.
[8] M.O.G. Duff. Optimal Learning: Computational Procedures For Bayes-Adaptive Markov Decision Processes. PhD thesis, University of Massachusetts Amherst, 2002.
[9] Raphael Fonteneau, Lucian Busoniu, and Rémi Munos. Optimistic planning for belief-augmented Markov decision processes. In IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2013), 2013.
[10] J.C. Gittins, R. Weber, and K.D. Glazebrook. Multi-armed Bandit Allocation Indices. Wiley, 1989.
[11] Neil J. Gordon, David J. Salmond, and Adrian F.M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In IEE Proceedings F (Radar and Signal Processing), volume 140, pages 107–113, 1993.
[12] A. Guez, D. Silver, and P. Dayan. Efficient Bayes-adaptive reinforcement learning using sample-based search. In Advances in Neural Information Processing Systems (NIPS), pages 1034–1042, 2012.
[13] Hanna Kurniawati, David Hsu, and Wee Sun Lee. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Robotics: Science and Systems, pages 65–72, 2008.
[14] H.R. Maei, C. Szepesvári, S. Bhatnagar, and R.S. Sutton. Toward off-policy learning control with function approximation. In Proc. ICML 2010, pages 719–726, 2010.
[15] Teodor Mihai Moldovan, Michael I. Jordan, and Pieter Abbeel. Dirichlet process reinforcement learning. In Reinforcement Learning and Decision Making Meeting, 2013.
[16] J. Pineau, G.
Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In International Joint Conference on Artificial Intelligence, volume 18, pages 1025–1032, 2003.
[17] S. Ross and J. Pineau. Model-based Bayesian reinforcement learning in large structured domains. In Proc. 24th Conference on Uncertainty in Artificial Intelligence (UAI 2008), pages 476–483, 2008.
[18] D. Silver and J. Veness. Monte-Carlo planning in large POMDPs. In Advances in Neural Information Processing Systems (NIPS), pages 2164–2172, 2010.
[19] David Silver, Richard S. Sutton, and Martin Müller. Temporal-difference search in computer Go. Machine Learning, 87(2):183–219, 2012.
[20] R.S. Sutton, H.R. Maei, D. Precup, S. Bhatnagar, D. Silver, C. Szepesvári, and E. Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML 2009), volume 382, page 125, 2009.
[21] Sebastian Thrun. Monte Carlo POMDPs. In NIPS, volume 12, pages 1064–1070, 1999.
[22] T. Wang, D. Lizotte, M. Bowling, and D. Schuurmans. Bayesian sparse sampling for on-line reward optimization. In Proceedings of the 22nd International Conference on Machine Learning, pages 956–963, 2005.
[23] Y. Wang, K.S. Won, D. Hsu, and W.S. Lee. Monte Carlo Bayesian reinforcement learning. In Proceedings of the 29th International Conference on Machine Learning, 2012.
Optimal Teaching for Limited-Capacity Human Learners

Kaustubh Raosaheb Patil, Affective Brain Lab, UCL & MIT Sloan Neuroeconomics Lab, kaustubh.patil@gmail.com
Xiaojin Zhu, Department of Computer Sciences, University of Wisconsin-Madison, jerryzhu@cs.wisc.edu
Łukasz Kopeć, Experimental Psychology, University College London, l.kopec.12@ucl.ac.uk
Bradley C. Love, Experimental Psychology, University College London, b.love@ucl.ac.uk

Abstract

Basic decisions, such as judging a person as a friend or foe, involve categorizing novel stimuli. Recent work finds that people's category judgments are guided by a small set of examples that are retrieved from memory at decision time. This limited and stochastic retrieval places limits on human performance for probabilistic classification decisions. In light of this capacity limitation, recent work finds that idealizing training items, such that the saliency of ambiguous cases is reduced, improves human performance on novel test items. One shortcoming of previous work on idealization is that category distributions were idealized in an ad hoc or heuristic fashion. In this contribution, we take a first-principles approach to constructing idealized training sets. We apply a machine teaching procedure to a cognitive model that is either limited capacity (as humans are) or unlimited capacity (as most machine learning systems are). As predicted, we find that the machine teacher recommends idealized training sets. We also find that human learners perform best when training recommendations from the machine teacher are based on a limited-capacity model. As predicted, to the extent that the learning model used by the machine teacher conforms to the true nature of human learners, the recommendations of the machine teacher prove effective. Our results provide a normative basis (given capacity constraints) for idealization procedures and offer a novel selection procedure for models of human learning.
1 Introduction

Judging a person as a friend or foe, a mushroom as edible or poisonous, or a sound as an /l/ or /r/ are examples of categorization tasks. Category knowledge is often acquired based on examples that are either provided by a teacher or drawn from past experience. One important research challenge is determining the best set of examples to provide a human learner to facilitate learning and the use of knowledge when making decisions, such as classifying novel stimuli. Such a teacher would be helpful in a pedagogical setting for curriculum design [1, 2]. Recent work suggests that people's categorization decisions are guided by a small set of examples retrieved at the time of decision [3]. This limited and stochastic retrieval places limits on human performance for probabilistic classification decisions, such as predicting the winner of a sports contest or classifying a mammogram as normal or tumorous [4]. In light of these capacity limits, Giguère and Love [3] determined and empirically verified that humans perform better at test after being trained on idealized category distributions that minimize the saliency of ambiguous cases during training. Unlike machine learning systems, which can have unlimited retrieval capacity, people performed better when trained on non-representative samples of category members, which is contrary to common machine learning practice, where the aim is to match training and test distributions [5]. One shortcoming of previous work on idealization is that category distributions were idealized in an ad hoc or heuristic fashion, guided only by the intuitions of the experimenters rather than by a rigorous systematic approach. In this contribution, we take a first-principles approach to constructing idealized training sets. We apply a machine teaching procedure [6] to a cognitive model that is either limited capacity (as humans are) or unlimited capacity (as most machine learning systems are).
One general prediction is that the machine teacher will idealize training sets. Such a result would establish a conceptual link between idealization manipulations from psychology and optimal teaching procedures from machine learning [7, 6, 8, 2, 9, 10, 11]. A second prediction is that human learners will perform best with training sets recommended by a machine teacher that adopts a limited-capacity model of the learner. To the extent that the learning model used by the machine teacher conforms to the true nature of human learners, the recommendations of the machine teacher should prove more effective. This latter prediction advances a novel method for evaluating theories of human learning. Overall, our work aims to provide a normative basis (given capacity constraints) for idealization procedures.

2 Limited- and Infinite-Capacity Models

Although there are many candidate models of human learning (see [12] for a review), to cement the connection with prior work [3] and to facilitate evaluation of model variants differing in capacity limits, we focus on exemplar models of human learning. Exemplar models have proven successful in accounting for human learning performance [13, 14], are consistent with neural representations of acquired categories [15], and share strong theoretical connections with machine learning approaches [16, 17]. Exemplar models represent categories as a collection of experienced training examples. At the time of decision, category examples (i.e., exemplars) are activated (i.e., retrieved) in proportion to their similarity to the stimulus. The category with the greatest total similarity across members tends to be chosen as the category response. Formally, the categorization problem is to estimate the label $\hat y$ of a test item x from its similarity with the training exemplars $\{(x_1, y_1), \ldots, (x_n, y_n)\}$. Exemplar models are consistent with the notion that people stochastically and selectively sample from memory at the time of decision.
For example, in the Exemplar-Based Random Walk (EBRW) model [18], exemplars are retrieved sequentially and stochastically as a function of their similarity to the stimulus. Retrieved exemplars provide evidence for category responses. When accumulated evidence (i.e., retrieved exemplars) for a response exceeds a threshold, the corresponding response is made. The number of steps in the diffusion process is the predicted response time. One basic feature of EBRW is that not all exemplars in memory need feed into the decision process. As discussed by Giguère and Love [3], finite decision thresholds in EBRW can be interpreted as a capacity limit in memory retrieval. When decision thresholds are finite, a limited number of exemplars are retrieved from memory. When capacity is limited in this fashion, models perform better when training sets are idealized. Idealization reduces the noise injected into the decision process by limited and stochastic sampling of information in memory. We aim to show that a machine teacher, particularly one using a limited-capacity model of the learner, will idealize training sets. Such a result would provide a normative basis (given capacity constraints) for idealization procedures. To evaluate our predictions, we formally specify a limited- and an unlimited-capacity exemplar model. Rather than work with EBRW, we instead choose a simpler mathematical model, the Generalized Context Model (GCM, [14]), which offers numerous advantages for our purposes. As discussed below, a parameter in GCM can be interpreted as specifying capacity and can be related to decision threshold placement in EBRW's drift-diffusion process. Given a finite training set (or a teaching set; we will use the two terms interchangeably) D = {(x1, y1), . . .
, (xn, yn)} and a test item (i.e., stimulus) x, GCM estimates the label probability as:

$$\hat p(y = 1 \mid x, D) = \frac{\left(b + \sum_{i \in D: y_i = 1} e^{-c\, d(x, x_i)}\right)^{\gamma}}{\left(b + \sum_{i \in D: y_i = 1} e^{-c\, d(x, x_i)}\right)^{\gamma} + \left(b + \sum_{i \in D: y_i = -1} e^{-c\, d(x, x_i)}\right)^{\gamma}} \qquad (1)$$

where d is the distance function that specifies the distance (e.g., the difference in length between two line stimuli) between the stimulus x and exemplar $x_i$, c is a scaling parameter that specifies the rate at which similarity decreases with distance (i.e., the bandwidth parameter for a kernel), and the parameter b is background similarity, which is related to irrelevant information activated in memory. Critically, the response scaling parameter, $\gamma$, has been shown to bear a relationship to decision threshold placement in EBRW [18]. In particular, Equation 1 is equivalent to EBRW's mean response (averaged over many trials) with decision threshold bounds placed $\gamma$ units away from the starting point for evidence accumulation. Thus, GCM with a low value of $\gamma$ can be viewed as a limited-capacity model, whereas GCM with a high value of $\gamma$ converges to the predictions of an infinite-capacity model. These two model variations (low and high $\gamma$ as surrogates for low and high capacity) will figure prominently in our study and analyses. To select a binary response, the learner samples a label according to the probability $\hat y \sim \text{Bernoulli}(\hat p(y = 1 \mid x, D))$. Therefore, the learner makes stochastic predictions. When measuring the classification error of the learner, we will take an expectation over this randomness. Let the distance function be $d(x_i, x_j) = |x_i - x_j|$. Thus a GCM learner can be represented using three parameters $\{b, c, \gamma\}$.

3 Machine Teaching for the GCM Learners

Machine teaching is the inverse problem of machine learning. Given a learner and a test distribution, machine teaching designs a small (typically non-iid) teaching set D such that the learner trained on D has the smallest test error [6].
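The GCM predictive probability of Eq. (1), with $d(x, x') = |x - x'|$, can be sketched as follows; the default parameter values are the low-capacity fits reported later in section 4.1:

```python
import numpy as np

def gcm_prob(x, X, Y, b=5.066, c=2.964, gamma=4.798):
    """P(y = 1 | x, D) for training exemplars X with labels Y in {-1, +1}.
    A low gamma acts as a limited-capacity learner; a high gamma approaches
    the unlimited-capacity limit."""
    X, Y = np.asarray(X, float), np.asarray(Y)
    sim = np.exp(-c * np.abs(x - X))          # exemplar similarities
    s_pos = (b + sim[Y == 1].sum()) ** gamma  # summed evidence for y = +1
    s_neg = (b + sim[Y == -1].sum()) ** gamma # summed evidence for y = -1
    return s_pos / (s_pos + s_neg)

# A small illustrative teaching set (not from the paper's experiments):
X = [0.1, 0.2, 0.8, 0.9]
Y = [-1, -1, 1, 1]
p_hi = gcm_prob(0.85, X, Y)   # stimulus near the y = +1 exemplars
p_lo = gcm_prob(0.15, X, Y)   # stimulus near the y = -1 exemplars
```

A stochastic response is then drawn as `Bernoulli(gcm_prob(x, X, Y))`, matching the learner's sampling rule in the text.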
The machine teaching framework poses an optimization problem:

$$\min_{D \in \mathbb{D}} \ \text{loss}(D) + \text{effort}(D). \qquad (2)$$

The optimization is over D, the teaching set that we present to the learner. For our task, $D = (x_1, y_1), \ldots, (x_n, y_n)$, where $x_i \in [0, 1]$ represents the 1-D feature of the ith stimulus, and $y_i \in \{-1, 1\}$ represents the ith label. The search space $\mathbb{D} = \{(\mathcal{X} \times \mathcal{Y})^n : n \in \mathbb{N}\}$ is the (infinite) set of finite teaching sets. Importantly, D is not required to consist of iid items drawn from the test distribution p(x, y). Rather, D will usually contain specially arranged items. This is a major difference from standard machine learning. Since we want to minimize classification error on future test items, we define the teaching loss function to be the generalization error:

$$\text{loss}(D) = \mathbb{E}_{(x,y) \sim p(x,y)}\, \mathbb{E}_{\hat y \sim \hat p(y \mid x, D)}\, \mathbf{1}_{y \neq \hat y}. \qquad (3)$$

The first expectation is with respect to the test distribution p(x, y). That is, we still assume that test items are drawn iid from the test distribution. The second expectation is with respect to the stochastic predictions that the GCM learner makes. Note that the teaching set D enters the loss() function through the GCM model $\hat p(y \mid x, D)$ in (1). We observe that:

$$\text{loss}(D) = \mathbb{E}_{x \sim p(x)}\left[ p(y = 1 \mid x)\, \hat p(y = -1 \mid x, D) + p(y = -1 \mid x)\, \hat p(y = 1 \mid x, D) \right] = \int \left[ \frac{1 - 2p(y = 1 \mid x)}{1 + \left(\frac{b + \sum_{i \in D: y_i = -1} e^{-c\, d(x, x_i)}}{b + \sum_{i \in D: y_i = 1} e^{-c\, d(x, x_i)}}\right)^{\gamma}} + p(y = 1 \mid x) \right] p(x)\, dx. \qquad (4)$$

The teaching effort function effort(D) is a powerful way to specify certain preferences over the teaching set space $\mathbb{D}$. For example, if we use effort(D) = |D|, the size of D, then the machine teaching problem (2) will prefer smaller teaching sets. In this paper, we use a simple definition of effort(): effort(D) = 0 if |D| = n, and $\infty$ otherwise. This infinity indicator function simply acts as a hard constraint so that D must have exactly n items. Equivalently, we may drop this effort() term from (2) altogether while requiring the search space $\mathbb{D}$ to consist of teaching sets of size exactly n.
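A sketch evaluating this teaching loss for a GCM learner, using the closed form $p_1(1 - \hat p) + (1 - p_1)\hat p$ of Eq. (4) on a hypothetical discrete test distribution. The exemplar positions and test conditionals are illustrative; the comparison merely illustrates the paper's point that a more "idealized" teaching set can incur lower expected error than an ambiguous one under the fitted low-capacity parameters:

```python
import numpy as np

def gcm_prob(x, X, Y, b=5.066, c=2.964, gamma=4.798):
    X, Y = np.asarray(X, float), np.asarray(Y)
    sim = np.exp(-c * np.abs(x - X))
    s_pos = (b + sim[Y == 1].sum()) ** gamma
    s_neg = (b + sim[Y == -1].sum()) ** gamma
    return s_pos / (s_pos + s_neg)

def teaching_loss(X, Y, z, p1):
    """Expected 0/1 test error of the stochastic GCM learner trained on (X, Y),
    for test stimuli z with conditionals p1 = P(y = 1 | z)."""
    phat = np.array([gcm_prob(zj, X, Y) for zj in z])
    return np.mean(p1 * (1.0 - phat) + (1.0 - p1) * phat)

z = np.linspace(0.0, 1.0, 9)          # hypothetical discrete test stimuli
p1 = (z >= 0.5).astype(float)         # hypothetical test conditionals
ideal = teaching_loss([0.05, 0.95], [-1, 1], z, p1)   # extreme ("idealized") items
ambig = teaching_loss([0.45, 0.55], [-1, 1], z, p1)   # boundary-adjacent items
```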
In this paper, we consider test distributions p(x, y) whose marginal on x has a special form. Specifically, we assume that p(x) is a uniform distribution over m distinct test stimuli z_1, . . . , z_m ∈ [0, 1]. In other words, there are only m distinct test stimuli. The test label y for stimulus z_j in any given test set is randomly sampled from p(y | z_j). Besides matching the actual behavioral experiments, this discrete marginal test distribution affords a further simplification to our teaching problem: the integral in (4) is replaced with a summation:

\min_{x_1 \ldots x_n \in [0,1];\; y_1 \ldots y_n \in \{-1,1\}} \; \frac{1}{m} \sum_{j=1}^{m} \left( \frac{1 - 2\, p(y = 1 \mid z_j)}{1 + \left( \frac{b + \sum_{i: y_i = -1} e^{-c\, d(z_j, x_i)}}{b + \sum_{i: y_i = 1} e^{-c\, d(z_j, x_i)}} \right)^{\gamma}} + p(y = 1 \mid z_j) \right).    (5)

It is useful to keep in mind that y_1 . . . y_n are the training item labels that we can design, while y is a dummy variable for the stochastic test label. In fact, equation (5) is a mixed integer program because we design both the continuous training stimuli x_1 . . . x_n and the discrete training labels y_1 . . . y_n. It is computationally challenging. We will relax this problem to arrive at our final optimization problem. We consider a smaller search space \mathbb{D} where each training item label y_i is uniquely determined by the position of x_i w.r.t. the true decision boundary θ* = 0.5. That is, y_i = 1 if x_i ≥ θ* and y_i = −1 if x_i < θ*. We do not have evidence that this reduced freedom in training labels adversely affects the power of the teaching set solution. We have now removed the difficult discrete optimization aspect, and arrive at the following continuous optimization problem to find an optimal teaching set (note the changes to the selector variables i):

\min_{x_1 \ldots x_n \in [0,1]} \; \frac{1}{m} \sum_{j=1}^{m} \left( \frac{1 - 2\, p(y = 1 \mid z_j)}{1 + \left( \frac{b + \sum_{i: x_i < 0.5} e^{-c\, d(z_j, x_i)}}{b + \sum_{i: x_i \geq 0.5} e^{-c\, d(z_j, x_i)}} \right)^{\gamma}} + p(y = 1 \mid z_j) \right).    (6)

4 Experiments

Using the machine teacher, we derive a variety of optimal training sets for low- and high-capacity GCM learners. We then evaluate how humans perform when trained on these recommended items (i.e.
training sets). The main predictions are that the machine teacher will idealize training sets and that humans will perform better on optimal training sets calculated using the low-capacity GCM variant. In what follows, we first specify parameter values for the GCM variants, present the optimal teaching sets we calculate, and then discuss the human experiments.

4.1 Specifying GCM parameters

The machine teacher requires a full specification of the learner, including its parameters. Parameters were set for the low-capacity GCM model by fitting the behavioral data from Experiment 2 of Giguère and Love [3]. GCM was fit to the aggregated data representing an average human learner by solving the following optimization problem:

\{\hat{b}, \hat{c}, \hat{\gamma}\} = \arg\min_{\hat{b}, \hat{c}, \hat{\gamma}} \; \sum_{i \in X^{(1)}} \left( g^{(1)}(x_i) - f^{(1)}(x_i) \right)^2 + \sum_{j \in X^{(2)}} \left( g^{(2)}(x_j) - f^{(2)}(x_j) \right)^2    (7)

where X^{(1)} and X^{(2)} are the sets of unique test stimuli for the two training conditions (actual and idealized) in Experiment 2. We define two functions to describe the estimated and empirical probabilities, respectively:

g^{(cond)}(x_i) = \hat{p}(y_i = 1 \mid x_i, D^{(cond)}), \qquad f^{(cond)}(x_i) = \frac{\sum_{j \in D^{(cond)}: y_j = 1} \mathbf{1}(x_j = x_i)}{\sum_{j' \in D^{(cond)}} \mathbf{1}(x_{j'} = x_i)}.

The function g above is defined using GCM in Equation 1. We solved Equation 7 to obtain the low-capacity GCM parameters that best capture human performance: {b̂, ĉ, γ̂} = {5.066, 2.964, 4.798}. We define a high-capacity GCM by only changing the γ̂ parameter, which is set an order of magnitude higher at γ̂ = 47.98.

4.2 Optimal Teaching Sets

The machine teacher was used to generate a variety of training sets that we evaluated on human learners. All training sets had size n = 20, which was chosen to maximize expected differences in human test performance across training sets. All conditions involved the same test conditional

Figure 1: The test conditional distribution. Each point shows a test item z_i and its conditional probability of being in the category y = 1.
The vertical dashed line shows the location of the true decision boundary θ* = 0.5.

distribution p(y | x) (see Figure 1). The test set consisted of m = 60 representative items evenly spaced over the stimulus domain [0, 1] with a probabilistic category structure. The conditional distribution p(y = 1 | x = z_j) for j = 1 . . . 60 was adapted from a related study [3]. We then solved the machine teaching problem (6) to obtain the optimal teaching sets for low- and high-capacity learners. The optimal training set for the low-capacity GCM places the items for each category in a clump far from the boundary (see Figure 2 for the optimal training sets). We refer to this optimal training set as Clump-Far. The placement of these items far from the boundary reflects the low capacity (i.e., low γ value) of the GCM. By separating the items from the two categories, the machine teacher makes it less likely that the low-capacity GCM will erroneously retrieve items from the opposing category at the time of test. As predicted, the machine teacher idealized the Clump-Far training set. A mathematical property of the high-capacity GCM suggests that it is sensitive only to the placement of training items adjacent to the decision boundary θ* (all other training items have exponentially small influence). Therefore, for the high-capacity model, up to computer precision, there is no unique optimal teaching set but rather a family of optimal sets (i.e., multiple teaching sets with the same loss or expected test error). We generated two training sets that are both optimal for the high-capacity model. The Clump-Near training set has one clump of similar items for each category close to the boundary. In contrast, the Spread training set spaces items uniformly outward, mimicking the idealization procedure in Giguère and Love [3]. We also generated Random teaching sets by sampling from the joint distribution U(x)p(y | x), where U(x) is uniform on [0, 1] and p(y | x) is the test conditional distribution.
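One simple way to approximate a solution of (6) is to restrict the search to symmetric clumps and grid-search the clump offset. This is only an illustrative sketch: the clump parameterization is our own restriction (a general-purpose continuous optimizer could search all n positions directly), and the test distribution below is made up rather than the paper's actual one:

```python
import math

def gcm_prob(x, D, b, c, gamma):
    # Equation 1 with d(x, x_i) = |x - x_i|
    s_pos = b + sum(math.exp(-c * abs(x - xi)) for xi, yi in D if yi == 1)
    s_neg = b + sum(math.exp(-c * abs(x - xi)) for xi, yi in D if yi == -1)
    return s_pos ** gamma / (s_pos ** gamma + s_neg ** gamma)

def teaching_loss(D, test_z, p_y1, b, c, gamma):
    # Objective of (6): average expected disagreement over the test grid
    return sum(p * (1 - gcm_prob(z, D, b, c, gamma))
               + (1 - p) * gcm_prob(z, D, b, c, gamma)
               for z, p in zip(test_z, p_y1)) / len(test_z)

def clump_set(delta, n):
    # Symmetric clumps: n/2 items at 0.5 - delta (y = -1) and n/2 items at
    # 0.5 + delta (y = +1); labels follow positions relative to theta* = 0.5.
    return [(0.5 - delta, -1)] * (n // 2) + [(0.5 + delta, 1)] * (n // 2)

def best_clump_offset(n, test_z, p_y1, b, c, gamma, grid=100):
    # Grid search over the clump offset delta in (0, 0.5)
    deltas = [k / (2 * grid) for k in range(1, grid)]
    return min(deltas, key=lambda d: teaching_loss(clump_set(d, n),
                                                   test_z, p_y1, b, c, gamma))

# Illustrative probabilistic category structure on a 61-point test grid,
# using the fitted low-capacity parameters from Section 4.1
test_z = [j / 60 for j in range(61)]
p_y1 = [0.2 if z < 0.5 else 0.8 for z in test_z]
delta = best_clump_offset(20, test_z, p_y1, 5.066, 2.964, 4.798)
```

By construction, the returned offset is at least as good (under the objective of (6)) as any other offset on the grid, and the resulting loss is bounded below by the Bayes error of the illustrative test distribution.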
Note that Random is the traditional iid training set in machine learning. The test error of the low- and high-capacity GCM under Random teaching sets was estimated by generating 10,000 random teaching sets. Table 1 shows that Clump-Far outperforms the other training sets for the low-capacity GCM. In contrast, Clump-Far, Clump-Near, and Spread are all optimal for the high-capacity GCM, reflecting the fact that for the high-capacity GCM the symmetry of the inner-most training item pair about the true decision boundary θ* determines the learned model. Not surprisingly, Random teaching sets lead to suboptimal test errors for both low- and high-capacity GCM.

Table 1: Loss (i.e., test error) for different teaching sets on low- and high-capacity GCM. Note that the smallest loss, 0.216, matches the optimal Bayes error rate.

GCM Model       Clump-Far   Spread   Clump-Near   Random
Low-capacity    0.245       0.261    0.397        M=0.332, SD=0.040
High-capacity   0.216       0.216    0.216        M=0.262, SD=0.066

In summary, we produced four kinds of teaching sets: (1) Clump-Far, which is the optimal teaching set for the low-capacity GCM; (2) Spread; (3) Clump-Near, where the first three are all optimal teaching sets for the high-capacity GCM; and (4) Random. The next section discusses how human participants fare with each of these four training sets. Consistent with our predictions, the machine teacher's choices idealized the training sets, with parallels to the idealization procedures used in Giguère and Love [3]. They found that human learners benefited when within-category variance was reduced (akin to clumping in Clump-Far and Clump-Near), training items were shifted away from the category boundary (akin to Clump-Far), and feedback was idealized (as in all the machine teaching sets considered). Their actual condition, in which training sets were not idealized, resembles the Random condition here.
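The pattern in Table 1, that the high-capacity GCM is insensitive to clump placement while the low-capacity GCM strongly prefers far clumps, can be checked numerically. This sketch uses the fitted parameters from Section 4.1, but the clump positions and probe stimulus are illustrative choices of ours:

```python
import math

def gcm_prob(x, D, b, c, gamma):
    # Equation 1 with d(x, x_i) = |x - x_i|
    s_pos = b + sum(math.exp(-c * abs(x - xi)) for xi, yi in D if yi == 1)
    s_neg = b + sum(math.exp(-c * abs(x - xi)) for xi, yi in D if yi == -1)
    return s_pos ** gamma / (s_pos ** gamma + s_neg ** gamma)

b, c = 5.066, 2.964
far = [(0.25, -1)] * 10 + [(0.75, 1)] * 10    # clumps far from the boundary
near = [(0.45, -1)] * 10 + [(0.55, 1)] * 10   # clumps near the boundary

z = 0.7  # a probe stimulus on the y = +1 side
for gamma, label in [(4.798, "low"), (47.98, "high")]:
    print(label, round(gcm_prob(z, far, b, c, gamma), 3),
          round(gcm_prob(z, near, b, c, gamma), 3))
```

Under the high γ, both placements drive the prediction to nearly 1; under the low γ, the far clumps yield a markedly sharper prediction than the near clumps, mirroring why Clump-Far helps the limited-capacity learner.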
As hoped, low-capacity and high-capacity GCM make radically different

Figure 2: (A) The teaching sets. The points show the machine teaching sets. Overlapping training points are shown as clumps along with the number of items. A particular Random teaching set is shown. All training labels y were in {1, −1}, but dithered vertically for viewing clarity. (B) The predictive distribution p̂(y = 1 | z, D) produced by the low-capacity GCM given a teaching set D. The vertical dashed lines show the position of the true decision boundary θ*. The curves for the high-capacity GCM were omitted for space.

predictions. Whereas high-capacity GCM is insensitive to variations across the machine teaching sets, low-capacity GCM should perform better under Clump-Far and Spread. The Clump-Near set leads to more errors in low-capacity GCM because items are confusable in memory, and therefore limited samples from memory can lead to suboptimal classification decisions. In the next section, we evaluate how humans perform with these four training sets, and compare human performance to that of low- and high-capacity GCM.

4.3 Human Study

Human participants were trained on one of the four training sets: Clump-Far, Spread, Clump-Near, and Random. Participants in all four conditions were tested (no corrective feedback provided) on the m = 60 grid test items z_1 . . . z_m in [0, 1].

Participants. US-based participants (N = 600) were recruited via Amazon Mechanical Turk, a paid online crowd-sourcing platform, which is an effective method for recruiting demographically diverse samples [19] and has been shown to yield results consistent with decision making studies in the laboratory [20]. In our sample, 297 of the 600 participants were female and the average age was 34.86.
Participants were paid $1.00 for completing the study, with the highest performing participant receiving a $20 bonus.

Design. Participants were randomly assigned to one of the four teaching conditions (see Figure 2). Notice that feedback was deterministic in all the teaching sets provided by the machine teacher, but was probabilistic as a function of stimulus for the Random condition. For the Random condition, each participant received a different sample of training items. The test set always consisted of 60 stimuli (see Figure 1). In both training and test trials, stimuli were presented sequentially in a random order (without replacement) determined for each participant.

Materials and Procedure. The stimuli were horizontal lines of various lengths. Participants learned to categorize these stimuli. The teaching set values x_i ∈ [0, 1] were converted into pixels by multiplying them by 400 and adding an offset. The offset for each participant was a uniformly selected random number from 30 to 100. As the study was performed online, screen size varied across participants (height x̄ = 879.16, s = 143.34 and width x̄ = 1479.6, s = 271.04). During the training phase, on every trial, participants were instructed to fixate on a small cross appearing in a random position on the screen. After 1000 ms, a line stimulus replaced the cross at the same position. Participants were then to indicate their category decision by pressing a key ("F" or "J") as quickly as possible without sacrificing accuracy. Once the participant responded, the stimulus

Figure 3: Human experiment results. Each bar corresponds to one of the training conditions. (A) The proportion of agreement between the individual training responses and the Bayes classifier. (B) The proportion of agreement between the individual test responses and the Bayes classifier.
(C) Inconsistency in individual test responses. The error bars are 95% confidence intervals.

was immediately replaced by a feedback message ("Correct" or "Wrong"), which was displayed for 2000 ms. The screen coordinates (horizontal/vertical) defining the stimulus (i.e., fixation cross and line) position were randomized on each trial to prevent participants from using marks or smudges on the screen as an aid. Participants completed 20 training trials. The procedure was identical for test trials, except that corrective feedback was not provided. Instead, "Thank You!" was displayed following a response. The test phase consisted of 60 trials. At the end of the test phase, each subject was asked to discriminate between the short and long lines from the Clump-Near training set (i.e., x = 0.435 and x = 0.565, the closest stimuli in the deterministically labeled training sets). Both lines were presented side-by-side, with their order counterbalanced between participants. Each participant was asked to indicate which of the two is longer.

Results. It is important that people could perceptually discriminate the categories for the exemplars close to the boundary, especially for the Clump-Near condition in which all the exemplars are close to the boundary. At the end of the main study, this was measured by asking each participant to indicate the longer line between the two. Overall, 97% of participants correctly indicated the longer line. This did not differ across conditions, F(3, 596) < 0.84, p ≈ 0.47. The optimal (i.e., Bayes) classifier deterministically assigns the correct class label ŷ = sign(x − θ*) to an item x. The agreement between training responses and the optimal classifier was significantly different across the four teaching conditions, F(3, 596) = 66.97, p < 0.05. As expected, the Random sets resulted in the lowest accuracy (M=65.2%) and the Clump-Far condition resulted in the highest accuracy (M=89.9%) (Figure 3A).
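The agreement measure used above can be sketched directly; the response vector below is illustrative, not real participant data:

```python
def bayes_label(x, theta=0.5):
    # Optimal (Bayes) classifier: y_hat = sign(x - theta*)
    return 1 if x >= theta else -1

def bayes_agreement(stimuli, responses, theta=0.5):
    """Proportion of a participant's responses matching the Bayes classifier."""
    hits = sum(1 for z, r in zip(stimuli, responses)
               if r == bayes_label(z, theta))
    return hits / len(responses)

stimuli = [0.1, 0.3, 0.6, 0.9]
responses = [-1, 1, 1, 1]                   # one error, at x = 0.3
print(bayes_agreement(stimuli, responses))  # 0.75
```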
Figure 3B shows how well the test responses agree with the Bayes classifier. The proportional agreement was significantly different across conditions, F(3, 596) = 9.16, p < 0.05. The Clump-Far and Spread conditions were significantly different from the Clump-Near condition, t(228.05) = 3.22, p < 0.05 and t(243.84) = 4.21, p < 0.05, respectively, and the Random condition, t(290.84) = 2.39, p < 0.05 and t(297.37) = 3.71, p < 0.05, respectively. The Clump-Far and Spread conditions did not differ, t(294.32) = 1.55, p ≈ 0.12. This result shows that the subjects in the Clump-Far and Spread conditions performed more similarly to the Bayes classifier than the subjects in the other two conditions. Individual test response inconsistency can be calculated as the number of neighboring stimuli that are categorized in opposite categories [3]. This measure of inconsistency attempts to quantify the stochastic memory retrieval; higher inconsistency reflects noisier memory sampling. The inconsistency significantly differed between the conditions, F(3, 596) = 7.73, p < 0.05 (Figure 3C). Both the Clump-Far and Spread teaching sets showed lower inconsistency, suggesting that those teaching sets lead to less noisy memory sampling. The inconsistencies for these two conditions did not differ significantly, two-sample t test, t(290.42) = 1.54, p ≈ 0.12. Inconsistencies in the Clump-Far and Spread conditions significantly differed from Clump-Near, t(281.7) = −2.53, p < 0.05 and t(291.04) = −2.58, p < 0.05, respectively, and Random, t(259.18) = −3.98, p < 0.05 and t(272.12) = −4.14, p < 0.05, respectively. We then calculated the test loss for each subject as \sum_{i=1}^{m} (1 − p(h_i | z_i)), where h_i is the response to the stimulus z_i. Figure 4 compares the observed and estimated test performance (i.e., 1 − loss()) in the four conditions. Overall, human performance is more closely followed by the low-capacity GCM. The human performance across the four conditions was significantly different, F(3, 596) = 11.15, p < 0.05.
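The inconsistency measure described above can be sketched as a count of category switches between responses to neighboring test stimuli (an illustrative implementation; the response vectors are made up):

```python
def inconsistency(responses):
    """Number of adjacent test-stimulus pairs (ordered by stimulus value)
    that received opposite category responses."""
    return sum(1 for a, b in zip(responses, responses[1:]) if a != b)

# A clean thresholded response has a single switch; noisier responding has more.
print(inconsistency([-1, -1, -1, 1, 1, 1]))   # 1
print(inconsistency([-1, 1, -1, 1, -1, 1]))   # 5
```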
The Clump-Far and Spread conditions did not significantly differ, t(295.96) = −0.8, p ≈ 0.42.

Figure 4: Empirical test performance of human learners and of low- and high-capacity GCM in the four teaching conditions. Test performance is measured as 1 − loss() (see (3)). Humans follow the low-capacity GCM more closely. The error bars are 95% confidence intervals.

Test performance in the Clump-Far and Spread conditions significantly differed from the Clump-Near condition, t(226.9) = 4.12, p < 0.05 and t(287.97) = 2.19, p < 0.05, respectively, and the Random condition, t(238.41) = 4.59, p < 0.05 and t(294.72) = 2.85, p < 0.05, respectively. Humans performed significantly worse in the Clump-Near condition than in the Random condition, t(253.94) = −2.394, p < 0.05. A similar pattern was observed for the low-capacity GCM, while the opposite held for the high-capacity GCM. Inconsistency, as defined above, significantly correlated with the test loss, Pearson's r = 0.56, t(148) = 8.34, p < 0.05. Taken together, these results provide support for the low-capacity account of human decision making [3]. In order to check whether the variability within the training set is predictive of test performance, we correlated the observed test loss with the estimated loss for the subjects in the Random condition. We observed a significant correlation between the test loss and the estimated loss for both low- and high-capacity models, Pearson's r = 0.273, t(148) = 3.45, p < 0.05 and r = 0.203, t(148) = 2.52, p < 0.05, respectively. This result suggests that, due to their limited capacity, human learners benefit from lower variability in the training sets, i.e., idealization. The individual median reaction time in the training phase significantly differed across teaching conditions, F(3, 596) = 10.66, p < 0.05.
The training median reaction time for the Clump-Far condition was the shortest (M=761 ms, SD=223) and differed significantly from all other conditions, two-sample t tests, all p < 0.05. The other conditions did not differ significantly from each other. The individual median reaction times in the test phase (M=767 ms, SD=187) did not differ across teaching conditions, F(3, 596) = 0.95, p ≈ 0.42. Taken together, our results suggest that the recommendations of the machine teacher for the low-capacity GCM are indeed effective for human learners. Furthermore, the lower inconsistency observed in this condition suggests that the machine teacher is performing idealization, which aids learning by reducing noise in the stochastic memory sampling process.

5 Discussion

A major aim of cognitive science is to understand human learning and to improve learning performance. We devised an optimal teacher for human category learning, a fundamental problem in cognitive science. Based on recent research, we focused on the GCM, which models humans' limited capacity for exemplar retrieval during decision making. We developed optimal teaching sets for the low- and high-capacity variants of the GCM learner. Using a 1D category learning task, we have shown that the optimal teaching set for the low-capacity GCM is clumped, symmetrical, and located far from the decision boundary, which is intuitively easy to learn. This provides a normative basis (given capacity limits) for the idealization procedures that reduce the saliency of ambiguous cases [2, 3]. The optimal teaching set indeed proved effective for human learning. Future work will pursue several extensions. One interesting topic not considered here is how the order of training examples affects learning. One possibility is that the optimal teacher will recommend easy examples earlier in training and then gradually progress to harder cases [2, 21]. Another important extension is the use of multi-dimensional stimuli.
Acknowledgments

The authors are thankful to the anonymous reviewers for their comments. This work is partly supported by the Leverhulme Trust grant RPG-2014-075 to BCL, National Science Foundation grant IIS-0953219 to XZ, and WT-MIT fellowship 103811AIA to KRP.

References

[1] P Shafto and N Goodman. A Bayesian Model of Pedagogical Reasoning. In AAAI Fall Symposium: Naturally-Inspired Artificial Intelligence '08, pages 101–102, 2008.
[2] Y Bengio, J Louradour, R Collobert, and J Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09, pages 1–8, New York, USA, June 2009. ACM Press.
[3] G Giguère and B C Love. Limits in decision making arise from limits in memory retrieval. Proceedings of the National Academy of Sciences of the United States of America, 110(19):7613–8, May 2013.
[4] A N Hornsby and B C Love. Improved classification of mammograms following idealized training. Journal of Applied Research in Memory and Cognition, 3:72–76, 2014.
[5] J Q Candela, M Sugiyama, A Schwaighofer, and N D Lawrence, editors. Dataset Shift in Machine Learning. MIT Press, first edition, 2009.
[6] X Zhu. Machine Teaching for Bayesian Learners in the Exponential Family. In Advances in Neural Information Processing Systems, pages 1905–1913, 2013.
[7] S A Goldman and M J Kearns. On the Complexity of Teaching. Journal of Computer and System Sciences, 50(1):20–31, 1995.
[8] F Khan, X Zhu, and B Mutlu. How Do Humans Teach: On Curriculum Learning and Teaching Dimension. In Advances in Neural Information Processing Systems, pages 1449–1457, 2011.
[9] F J Balbach and T Zeugmann. Recent Developments in Algorithmic Teaching. In A H Dediu, A M Ionescu, and C Martín-Vide, editors, Language and Automata Theory and Applications, volume 5457 of Lecture Notes in Computer Science, pages 1–18. Springer, Berlin-Heidelberg, March 2009.
[10] M Cakmak and M Lopes. Algorithmic and Human Teaching of Sequential Decision Tasks.
In AAAI Conference on Artificial Intelligence (AAAI-12), July 2012.
[11] R Lindsey, M Mozer, W J Huggins, and H Pashler. Optimizing Instructional Policies. In Advances in Neural Information Processing Systems, pages 2778–2786, 2013.
[12] B C Love. Categorization. In K N Ochsner and S M Kosslyn, editors, Oxford Handbook of Cognitive Neuroscience, pages 342–358. Oxford University Press, 2013.
[13] D L Medin and M M Schaffer. Context theory of classification learning. Psychological Review, 85(3):207–238, 1978.
[14] R M Nosofsky. Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115(1):39–61, March 1986.
[15] M L Mack, A R Preston, and B C Love. Decoding the brain's algorithm for categorization from its neural implementation. Current Biology, 23:2023–2027, 2013.
[16] Y Chen, E K Garcia, M R Gupta, A Rahimi, and L Cazzanti. Similarity-based Classification: Concepts and Algorithms. The Journal of Machine Learning Research, 10:747–776, December 2009.
[17] F Jäkel, B Schölkopf, and F A Wichmann. Does cognitive science need kernels? Trends in Cognitive Sciences, 13(9):381–388, 2009.
[18] R M Nosofsky and T J Palmeri. An exemplar-based random walk model of speeded classification. Psychological Review, 104(2):266–300, April 1997.
[19] M Buhrmester, T Kwang, and S D Gosling. Amazon's Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data? Perspectives on Psychological Science, 6(1):3–5, February 2011.
[20] M J C Crump, J V McDonnell, and T M Gureckis. Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research. PLoS ONE, 8(3):e57410, January 2013.
[21] H Pashler and M C Mozer. When does fading enhance perceptual category learning? Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(4):1162–73, July 2013.